Deep Neural Networks Rival the Representation of Primate IT Cortex for Core Visual Object Recognition

Mendeley | Further Information

{"title"=>"Deep Neural Networks Rival the Representation of Primate IT Cortex for Core Visual Object Recognition", "type"=>"journal", "authors"=>[{"first_name"=>"Charles F.", "last_name"=>"Cadieu", "scopus_author_id"=>"22233507300"}, {"first_name"=>"Ha", "last_name"=>"Hong", "scopus_author_id"=>"56121769900"}, {"first_name"=>"Daniel L K", "last_name"=>"Yamins", "scopus_author_id"=>"8276525500"}, {"first_name"=>"Nicolas", "last_name"=>"Pinto", "scopus_author_id"=>"35273071400"}, {"first_name"=>"Diego", "last_name"=>"Ardila", "scopus_author_id"=>"56458163000"}, {"first_name"=>"Ethan A.", "last_name"=>"Solomon", "scopus_author_id"=>"56200522300"}, {"first_name"=>"Najib J.", "last_name"=>"Majaj", "scopus_author_id"=>"6602251596"}, {"first_name"=>"James J.", "last_name"=>"DiCarlo", "scopus_author_id"=>"7006387907"}], "year"=>2014, "source"=>"PLoS Computational Biology", "identifiers"=>{"pui"=>"601020018", "sgr"=>"84919607718", "pmid"=>"25521294", "scopus"=>"2-s2.0-84919607718", "isbn"=>"1553-7358", "arxiv"=>"1406.3284", "doi"=>"10.1371/journal.pcbi.1003963", "issn"=>"15537358"}, "id"=>"c08c3cb8-b1aa-3e11-9039-c435e2e15a0c", "abstract"=>"The primate visual system achieves remarkable visual object recognition performance even in brief presentations and under changes to object exemplar, geometric transformations, and background variation (a.k.a. core visual object recognition). This remarkable performance is mediated by the representation formed in inferior temporal (IT) cortex. In parallel, recent advances in machine learning have led to ever higher performing models of object recognition using artificial deep neural networks (DNNs). It remains unclear, however, whether the representational performance of DNNs rivals that of the brain. To accurately produce such a comparison, a major difficulty has been a unifying metric that accounts for experimental limitations such as the amount of noise, the number of neural recording sites, and the number trials, and computational limitations such as the complexity of the decoding classifier and the number of classifier training examples. In this work we perform a direct comparison that corrects for these experimental limitations and computational considerations. As part of our methodology, we propose an extension of \"kernel analysis\" that measures the generalization accuracy as a function of representational complexity. Our evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task. Furthermore, we show that models that perform well on measures of representational performance also perform well on measures of representational similarity to IT and on measures of predicting individual IT multi-unit responses. Whether these DNNs rely on computational mechanisms similar to the primate visual system is yet to be determined, but, unlike all previous bio-inspired models, that possibility cannot be ruled out merely on representational performance grounds.", "link"=>"http://www.mendeley.com/research/deep-neural-networks-rival-representation-primate-it-cortex-core-visual-object-recognition", "reader_count"=>503, "reader_count_by_academic_status"=>{"Unspecified"=>20, "Professor > Associate Professor"=>18, "Researcher"=>96, "Student > Doctoral Student"=>18, "Student > Ph. D. 
Student"=>174, "Student > Postgraduate"=>14, "Student > Master"=>81, "Other"=>16, "Student > Bachelor"=>47, "Lecturer"=>4, "Professor"=>15}, "reader_count_by_user_role"=>{"Unspecified"=>20, "Professor > Associate Professor"=>18, "Researcher"=>96, "Student > Doctoral Student"=>18, "Student > Ph. D. Student"=>174, "Student > Postgraduate"=>14, "Student > Master"=>81, "Other"=>16, "Student > Bachelor"=>47, "Lecturer"=>4, "Professor"=>15}, "reader_count_by_subject_area"=>{"Unspecified"=>36, "Agricultural and Biological Sciences"=>79, "Philosophy"=>2, "Business, Management and Accounting"=>2, "Chemical Engineering"=>1, "Chemistry"=>1, "Computer Science"=>162, "Earth and Planetary Sciences"=>3, "Economics, Econometrics and Finance"=>1, "Engineering"=>59, "Biochemistry, Genetics and Molecular Biology"=>1, "Mathematics"=>13, "Medicine and Dentistry"=>8, "Neuroscience"=>70, "Pharmacology, Toxicology and Pharmaceutical Science"=>1, "Physics and Astronomy"=>15, "Psychology"=>49}, "reader_count_by_subdiscipline"=>{"Medicine and Dentistry"=>{"Medicine and Dentistry"=>8}, "Physics and Astronomy"=>{"Physics and Astronomy"=>15}, "Psychology"=>{"Psychology"=>49}, "Mathematics"=>{"Mathematics"=>13}, "Unspecified"=>{"Unspecified"=>36}, "Pharmacology, Toxicology and Pharmaceutical Science"=>{"Pharmacology, Toxicology and Pharmaceutical Science"=>1}, "Chemical Engineering"=>{"Chemical Engineering"=>1}, "Engineering"=>{"Engineering"=>59}, "Chemistry"=>{"Chemistry"=>1}, "Neuroscience"=>{"Neuroscience"=>70}, "Earth and Planetary Sciences"=>{"Earth and Planetary Sciences"=>3}, "Economics, Econometrics and Finance"=>{"Economics, Econometrics and Finance"=>1}, "Agricultural and Biological Sciences"=>{"Agricultural and Biological Sciences"=>79}, "Computer Science"=>{"Computer Science"=>162}, "Business, Management and Accounting"=>{"Business, Management and Accounting"=>2}, "Biochemistry, Genetics and Molecular Biology"=>{"Biochemistry, Genetics and Molecular Biology"=>1}, "Philosophy"=>{"Philosophy"=>2}}, "reader_count_by_country"=>{"Hong Kong"=>1, "United States"=>19, "Japan"=>4, "United Kingdom"=>6, "Switzerland"=>2, "Russia"=>1, "Spain"=>2, "Canada"=>1, "Netherlands"=>2, "Korea (South)"=>1, "Italy"=>2, "Slovakia"=>1, "France"=>1, "Germany"=>2}, "group_count"=>22}

CrossRef

Scopus | Further Information

{"@_fa"=>"true", "link"=>[{"@_fa"=>"true", "@ref"=>"self", "@href"=>"https://api.elsevier.com/content/abstract/scopus_id/84919607718"}, {"@_fa"=>"true", "@ref"=>"author-affiliation", "@href"=>"https://api.elsevier.com/content/abstract/scopus_id/84919607718?field=author,affiliation"}, {"@_fa"=>"true", "@ref"=>"scopus", "@href"=>"https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84919607718&origin=inward"}, {"@_fa"=>"true", "@ref"=>"scopus-citedby", "@href"=>"https://www.scopus.com/inward/citedby.uri?partnerID=HzOxMe3b&scp=84919607718&origin=inward"}], "prism:url"=>"https://api.elsevier.com/content/abstract/scopus_id/84919607718", "dc:identifier"=>"SCOPUS_ID:84919607718", "eid"=>"2-s2.0-84919607718", "dc:title"=>"Deep Neural Networks Rival the Representation of Primate IT Cortex for Core Visual Object Recognition", "dc:creator"=>"Cadieu C.", "prism:publicationName"=>"PLoS Computational Biology", "prism:issn"=>"1553734X", "prism:eIssn"=>"15537358", "prism:volume"=>"10", "prism:issueIdentifier"=>"12", "prism:pageRange"=>nil, "prism:coverDate"=>"2014-01-01", "prism:coverDisplayDate"=>"1 December 2014", "prism:doi"=>"10.1371/journal.pcbi.1003963", "citedby-count"=>"176", "affiliation"=>[{"@_fa"=>"true", "affilname"=>"McGovern Institute for Brain Research", "affiliation-city"=>"Cambridge", "affiliation-country"=>"United States"}], "pubmed-id"=>"25521294", "prism:aggregationType"=>"Journal", "subtype"=>"ar", "subtypeDescription"=>"Article", "source-id"=>"4000151810", "openaccess"=>"1", "openaccessFlag"=>true}

Article Coverage

Facebook

  • {"url"=>"http%3A%2F%2Fjournals.plos.org%2Fploscompbiol%2Farticle%3Fid%3D10.1371%252Fjournal.pcbi.1003963", "share_count"=>5, "like_count"=>0, "comment_count"=>0, "click_count"=>0, "total_count"=>5}

Twitter

Counter

  • {"month"=>"12", "year"=>"2014", "pdf_views"=>"956", "xml_views"=>"29", "html_views"=>"7026"}
  • {"month"=>"1", "year"=>"2015", "pdf_views"=>"348", "xml_views"=>"6", "html_views"=>"2758"}
  • {"month"=>"2", "year"=>"2015", "pdf_views"=>"127", "xml_views"=>"0", "html_views"=>"554"}
  • {"month"=>"3", "year"=>"2015", "pdf_views"=>"89", "xml_views"=>"0", "html_views"=>"457"}
  • {"month"=>"4", "year"=>"2015", "pdf_views"=>"98", "xml_views"=>"1", "html_views"=>"536"}
  • {"month"=>"5", "year"=>"2015", "pdf_views"=>"91", "xml_views"=>"0", "html_views"=>"439"}
  • {"month"=>"6", "year"=>"2015", "pdf_views"=>"70", "xml_views"=>"0", "html_views"=>"411"}
  • {"month"=>"7", "year"=>"2015", "pdf_views"=>"79", "xml_views"=>"0", "html_views"=>"283"}
  • {"month"=>"8", "year"=>"2015", "pdf_views"=>"84", "xml_views"=>"0", "html_views"=>"363"}
  • {"month"=>"9", "year"=>"2015", "pdf_views"=>"85", "xml_views"=>"0", "html_views"=>"399"}
  • {"month"=>"10", "year"=>"2015", "pdf_views"=>"84", "xml_views"=>"0", "html_views"=>"316"}
  • {"month"=>"11", "year"=>"2015", "pdf_views"=>"66", "xml_views"=>"1", "html_views"=>"296"}
  • {"month"=>"12", "year"=>"2015", "pdf_views"=>"56", "xml_views"=>"0", "html_views"=>"212"}
  • {"month"=>"1", "year"=>"2016", "pdf_views"=>"91", "xml_views"=>"0", "html_views"=>"1526"}
  • {"month"=>"2", "year"=>"2016", "pdf_views"=>"62", "xml_views"=>"0", "html_views"=>"290"}
  • {"month"=>"3", "year"=>"2016", "pdf_views"=>"59", "xml_views"=>"0", "html_views"=>"272"}
  • {"month"=>"4", "year"=>"2016", "pdf_views"=>"91", "xml_views"=>"0", "html_views"=>"236"}
  • {"month"=>"5", "year"=>"2016", "pdf_views"=>"86", "xml_views"=>"0", "html_views"=>"237"}
  • {"month"=>"6", "year"=>"2016", "pdf_views"=>"62", "xml_views"=>"0", "html_views"=>"202"}
  • {"month"=>"7", "year"=>"2016", "pdf_views"=>"64", "xml_views"=>"0", "html_views"=>"196"}
  • {"month"=>"8", "year"=>"2016", "pdf_views"=>"67", "xml_views"=>"0", "html_views"=>"204"}
  • {"month"=>"9", "year"=>"2016", "pdf_views"=>"91", "xml_views"=>"0", "html_views"=>"307"}
  • {"month"=>"10", "year"=>"2016", "pdf_views"=>"87", "xml_views"=>"0", "html_views"=>"412"}
  • {"month"=>"11", "year"=>"2016", "pdf_views"=>"46", "xml_views"=>"1", "html_views"=>"413"}
  • {"month"=>"12", "year"=>"2016", "pdf_views"=>"84", "xml_views"=>"0", "html_views"=>"519"}
  • {"month"=>"1", "year"=>"2017", "pdf_views"=>"91", "xml_views"=>"3", "html_views"=>"566"}
  • {"month"=>"2", "year"=>"2017", "pdf_views"=>"85", "xml_views"=>"1", "html_views"=>"394"}
  • {"month"=>"3", "year"=>"2017", "pdf_views"=>"85", "xml_views"=>"0", "html_views"=>"419"}
  • {"month"=>"4", "year"=>"2017", "pdf_views"=>"81", "xml_views"=>"4", "html_views"=>"435"}
  • {"month"=>"5", "year"=>"2017", "pdf_views"=>"75", "xml_views"=>"2", "html_views"=>"475"}
  • {"month"=>"6", "year"=>"2017", "pdf_views"=>"89", "xml_views"=>"0", "html_views"=>"398"}
  • {"month"=>"7", "year"=>"2017", "pdf_views"=>"78", "xml_views"=>"0", "html_views"=>"330"}
  • {"month"=>"8", "year"=>"2017", "pdf_views"=>"79", "xml_views"=>"1", "html_views"=>"272"}
  • {"month"=>"9", "year"=>"2017", "pdf_views"=>"85", "xml_views"=>"1", "html_views"=>"384"}
  • {"month"=>"10", "year"=>"2017", "pdf_views"=>"74", "xml_views"=>"1", "html_views"=>"461"}
  • {"month"=>"11", "year"=>"2017", "pdf_views"=>"89", "xml_views"=>"2", "html_views"=>"393"}
  • {"month"=>"12", "year"=>"2017", "pdf_views"=>"70", "xml_views"=>"2", "html_views"=>"419"}
  • {"month"=>"1", "year"=>"2018", "pdf_views"=>"75", "xml_views"=>"0", "html_views"=>"277"}
  • {"month"=>"2", "year"=>"2018", "pdf_views"=>"89", "xml_views"=>"2", "html_views"=>"173"}
  • {"month"=>"3", "year"=>"2018", "pdf_views"=>"63", "xml_views"=>"0", "html_views"=>"150"}
  • {"month"=>"4", "year"=>"2018", "pdf_views"=>"87", "xml_views"=>"0", "html_views"=>"186"}
  • {"month"=>"5", "year"=>"2018", "pdf_views"=>"89", "xml_views"=>"3", "html_views"=>"135"}
  • {"month"=>"6", "year"=>"2018", "pdf_views"=>"64", "xml_views"=>"0", "html_views"=>"114"}
  • {"month"=>"7", "year"=>"2018", "pdf_views"=>"72", "xml_views"=>"5", "html_views"=>"151"}
  • {"month"=>"8", "year"=>"2018", "pdf_views"=>"78", "xml_views"=>"1", "html_views"=>"123"}
  • {"month"=>"9", "year"=>"2018", "pdf_views"=>"79", "xml_views"=>"0", "html_views"=>"133"}
  • {"month"=>"10", "year"=>"2018", "pdf_views"=>"88", "xml_views"=>"1", "html_views"=>"132"}
  • {"month"=>"11", "year"=>"2018", "pdf_views"=>"110", "xml_views"=>"1", "html_views"=>"178"}
  • {"month"=>"12", "year"=>"2018", "pdf_views"=>"94", "xml_views"=>"1", "html_views"=>"128"}
  • {"month"=>"1", "year"=>"2019", "pdf_views"=>"60", "xml_views"=>"1", "html_views"=>"107"}
  • {"month"=>"2", "year"=>"2019", "pdf_views"=>"84", "xml_views"=>"0", "html_views"=>"111"}
  • {"month"=>"3", "year"=>"2019", "pdf_views"=>"89", "xml_views"=>"4", "html_views"=>"121"}
  • {"month"=>"4", "year"=>"2019", "pdf_views"=>"107", "xml_views"=>"1", "html_views"=>"126"}
  • {"month"=>"5", "year"=>"2019", "pdf_views"=>"76", "xml_views"=>"1", "html_views"=>"137"}
  • {"month"=>"6", "year"=>"2019", "pdf_views"=>"71", "xml_views"=>"0", "html_views"=>"112"}
  • {"month"=>"7", "year"=>"2019", "pdf_views"=>"72", "xml_views"=>"0", "html_views"=>"105"}
  • {"month"=>"8", "year"=>"2019", "pdf_views"=>"65", "xml_views"=>"0", "html_views"=>"98"}
  • {"month"=>"9", "year"=>"2019", "pdf_views"=>"58", "xml_views"=>"0", "html_views"=>"129"}
  • {"month"=>"10", "year"=>"2019", "pdf_views"=>"110", "xml_views"=>"1", "html_views"=>"162"}
  • {"month"=>"11", "year"=>"2019", "pdf_views"=>"75", "xml_views"=>"0", "html_views"=>"166"}
  • {"month"=>"12", "year"=>"2019", "pdf_views"=>"43", "xml_views"=>"1", "html_views"=>"77"}

Figshare

  • {"files"=>["https://ndownloader.figshare.com/files/1846682"], "description"=>"<p>Plotting conventions are the same as in <a href=\"http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1003963#pcbi-1003963-g002\" target=\"_blank\">Fig. 2</a>. Multi-unit analysis is presented in panel A and single-unit analysis in B. Note that the model representations have been modified such that they are both subsampled and noisy versions of those analyzed in <a href=\"http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1003963#pcbi-1003963-g002\" target=\"_blank\">Fig. 2</a> and this modification is indicated by the symbol for noise matched to the multi-unit IT cortex sample and by the symbol for noise matched to the single-unit IT cortex sample. To correct for sampling bias, the multi-unit analysis uses 80 samples, either 80 neural multi-units from V4 or IT cortex, or 80 features from the model representations, and the single-unit analysis uses 40 samples. To correct for experimental and intrinsic neural noise, we added noise to the subsampled model representation (no additional noise is added to the neural representations) that is commensurate to the observed noise from the IT measurements. Note that we observed similar noise between the V4 and IT Cortex samples and we do not attempt to correct the V4 cortex sample of the noise observed in the IT cortex sample. We observed substantially higher noise levels in IT single-unit recordings than multi-unit recordings due to both higher trial-to-trial variability and more trials for the multi-unit recordings. All model representations suffer decreases in accuracy after correcting for sampling and adding noise (compare absolute precision values to <a href=\"http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1003963#pcbi-1003963-g002\" target=\"_blank\">Fig. 2</a>). All three deep neural networks perform significantly better than the V4 cortex sample. For the multi-unit analysis (A), IT cortex sample achieves high precision and is only matched in performance by the Zeiler & Fergus 2013 representation. For the single-unit analysis (B), both the Krizhevsky et al. 2012 and the Zeiler & Fergus 2013 representations surpass the IT representational performance.</p>", "links"=>[], "tags"=>["object recognition", "object recognition performance", "dnn", "limitation", "classifier training examples", "object recognition task", "Deep Neural Networks Rival", "model"], "article_id"=>1274061, "categories"=>["Uncategorised"], "users"=>["Charles F. Cadieu", "Ha Hong", "Daniel L. K. Yamins", "Nicolas Pinto", "Diego Ardila", "Ethan A. Solomon", "Najib J. Majaj", "James J. DiCarlo"], "doi"=>"https://dx.doi.org/10.1371/journal.pcbi.1003963.g003", "stats"=>{"downloads"=>0, "page_views"=>31, "likes"=>0}, "figshare_url"=>"https://figshare.com/articles/_Kernel_analysis_curves_of_sample_and_noise_matched_neural_and_model_representations_/1274061", "title"=>"Kernel analysis curves of sample and noise matched neural and model representations.", "pos_in_sequence"=>0, "defined_type"=>1, "published_date"=>"2014-12-18 02:42:30"}
  • {"files"=>["https://ndownloader.figshare.com/files/1846681"], "description"=>"<p>Precision, one minus loss (), is plotted against complexity, the inverse of the regularization parameter (). Shaded regions indicate the standard deviation of the measurement over image set randomizations, which are often smaller than the line thickness. The Zeiler & Fergus 2013, Krizhevsky et al. 2012 and HMO models are all hierarchical deep neural networks. HMAX <a href=\"http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1003963#pcbi.1003963-Mutch1\" target=\"_blank\">[41]</a> is a model of the ventral visual stream and the V1-like <a href=\"http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1003963#pcbi.1003963-Pinto2\" target=\"_blank\">[35]</a> and V2-like <a href=\"http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1003963#pcbi.1003963-Freeman1\" target=\"_blank\">[42]</a> models attempt to replicate response properties of visual areas V1 and V2, respectively. These analyses indicate that the task we are measuring proves difficult for V1-like and V2-like models, with these models barely moving from 0.0 precision for all levels of complexity. Furthermore, the HMAX model, which has previously been shown to perform relatively well on object recognition tasks, performs only marginally better. Each of the remaining deep neural network models performs drastically better, with the Zeiler & Fergus 2013 model performing best for all levels of complexity. These results indicate that the visual object recognition task we evaluate is computationally challenging for all but the latest deep neural networks.</p>", "links"=>[], "tags"=>["object recognition", "object recognition performance", "dnn", "limitation", "classifier training examples", "object recognition task", "Deep Neural Networks Rival", "model"], "article_id"=>1274060, "categories"=>["Uncategorised"], "users"=>["Charles F. Cadieu", "Ha Hong", "Daniel L. K. Yamins", "Nicolas Pinto", "Diego Ardila", "Ethan A. Solomon", "Najib J. Majaj", "James J. DiCarlo"], "doi"=>"https://dx.doi.org/10.1371/journal.pcbi.1003963.g002", "stats"=>{"downloads"=>1, "page_views"=>18, "likes"=>0}, "figshare_url"=>"https://figshare.com/articles/_Kernel_analysis_curves_of_model_representations_/1274060", "title"=>"Kernel analysis curves of model representations.", "pos_in_sequence"=>0, "defined_type"=>1, "published_date"=>"2014-12-18 02:42:30"}
  • {"files"=>["https://ndownloader.figshare.com/files/1846689", "https://ndownloader.figshare.com/files/1846690", "https://ndownloader.figshare.com/files/1846691", "https://ndownloader.figshare.com/files/1846692", "https://ndownloader.figshare.com/files/1846693", "https://ndownloader.figshare.com/files/1846694"], "description"=>"<div><p>The primate visual system achieves remarkable visual object recognition performance even in brief presentations, and under changes to object exemplar, geometric transformations, and background variation (a.k.a. core visual object recognition). This remarkable performance is mediated by the representation formed in inferior temporal (IT) cortex. In parallel, recent advances in machine learning have led to ever higher performing models of object recognition using artificial deep neural networks (DNNs). It remains unclear, however, whether the representational performance of DNNs rivals that of the brain. To accurately produce such a comparison, a major difficulty has been a unifying metric that accounts for experimental limitations, such as the amount of noise, the number of neural recording sites, and the number of trials, and computational limitations, such as the complexity of the decoding classifier and the number of classifier training examples. In this work, we perform a direct comparison that corrects for these experimental limitations and computational considerations. As part of our methodology, we propose an extension of “kernel analysis” that measures the generalization accuracy as a function of representational complexity. Our evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task. Furthermore, we show that models that perform well on measures of representational performance also perform well on measures of representational similarity to IT, and on measures of predicting individual IT multi-unit responses. Whether these DNNs rely on computational mechanisms similar to the primate visual system is yet to be determined, but, unlike all previous bio-inspired models, that possibility cannot be ruled out merely on representational performance grounds.</p></div>", "links"=>[], "tags"=>["object recognition", "object recognition performance", "dnn", "limitation", "classifier training examples", "object recognition task", "Deep Neural Networks Rival", "model"], "article_id"=>1274068, "categories"=>["Uncategorised"], "users"=>["Charles F. Cadieu", "Ha Hong", "Daniel L. K. Yamins", "Nicolas Pinto", "Diego Ardila", "Ethan A. Solomon", "Najib J. Majaj", "James J. DiCarlo"], "doi"=>["https://dx.doi.org/10.1371/journal.pcbi.1003963.s001", "https://dx.doi.org/10.1371/journal.pcbi.1003963.s002", "https://dx.doi.org/10.1371/journal.pcbi.1003963.s003", "https://dx.doi.org/10.1371/journal.pcbi.1003963.s004", "https://dx.doi.org/10.1371/journal.pcbi.1003963.s005", "https://dx.doi.org/10.1371/journal.pcbi.1003963.s006"], "stats"=>{"downloads"=>24, "page_views"=>47, "likes"=>0}, "figshare_url"=>"https://figshare.com/articles/_Deep_Neural_Networks_Rival_the_Representation_of_Primate_IT_Cortex_for_Core_Visual_Object_Recognition_/1274068", "title"=>"Deep Neural Networks Rival the Representation of Primate IT Cortex for Core Visual Object Recognition", "pos_in_sequence"=>0, "defined_type"=>4, "published_date"=>"2014-12-18 02:42:30"}
  • {"files"=>["https://ndownloader.figshare.com/files/1846686"], "description"=>"<p>A) The median predictions of IT multi-unit responses averaged over 10 train/test splits is plotted for model representations and V4 multi-units. Error bars indicate standard deviation over the 10 train/test splits. Predictions are normalized to correct for trial-to-trial variability of the IT multi-unit recording and calculated as percentage of explained, explainable variance. The HMO, Krizhevsky et al. 2012, and Zeiler & Fergus 2013 representations achieve IT multi-unit predictions that are comparable to the predictions produced by the V4 multi-unit representation. B) The mean predictions over the 10 train/test splits for the V4 cortex multi-unit sample and the Zeiler & Fergus 2013 DNN are plotted against each other for each IT multi-unit site.</p>", "links"=>[], "tags"=>["object recognition", "object recognition performance", "dnn", "limitation", "classifier training examples", "object recognition task", "Deep Neural Networks Rival", "model"], "article_id"=>1274065, "categories"=>["Uncategorised"], "users"=>["Charles F. Cadieu", "Ha Hong", "Daniel L. K. Yamins", "Nicolas Pinto", "Diego Ardila", "Ethan A. Solomon", "Najib J. Majaj", "James J. DiCarlo"], "doi"=>"https://dx.doi.org/10.1371/journal.pcbi.1003963.g006", "stats"=>{"downloads"=>0, "page_views"=>19, "likes"=>0}, "figshare_url"=>"https://figshare.com/articles/_Neural_and_model_representation_predictions_of_IT_multi_unit_responses_/1274065", "title"=>"Neural and model representation predictions of IT multi-unit responses.", "pos_in_sequence"=>0, "defined_type"=>1, "published_date"=>"2014-12-18 02:42:30"}
  • {"files"=>["https://ndownloader.figshare.com/files/1846685"], "description"=>"<p>Testing set classification accuracy averaged over 10 randomly-sampled test sets is plotted and error bars indicate standard deviation over the 10 random samples. Chance performance is ∼14.3%. V4 and IT Cortex Multi-Unit Sample are the values measured directly from the neural samples. Following the analysis in <a href=\"http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1003963#pcbi-1003963-g003\" target=\"_blank\">Fig. 3A</a>, the model representations have been modified such that they are both subsampled and have noise added that is matched to the observed IT multi-unit noise. We indicate this modification by the symbol. Both model and neural representations are subsampled to 80 multi-unit samples or 80 features. Mirroring the results using kernel analysis, the IT cortex multi-unit sample achieves high generalization accuracy and is only matched in performance by the Zeiler & Fergus 2013 representation.</p>", "links"=>[], "tags"=>["object recognition", "object recognition performance", "dnn", "limitation", "classifier training examples", "object recognition task", "Deep Neural Networks Rival", "model"], "article_id"=>1274064, "categories"=>["Uncategorised"], "users"=>["Charles F. Cadieu", "Ha Hong", "Daniel L. K. Yamins", "Nicolas Pinto", "Diego Ardila", "Ethan A. Solomon", "Najib J. Majaj", "James J. DiCarlo"], "doi"=>"https://dx.doi.org/10.1371/journal.pcbi.1003963.g005", "stats"=>{"downloads"=>1, "page_views"=>22, "likes"=>0}, "figshare_url"=>"https://figshare.com/articles/_Linear_SVM_generalization_performance_of_neural_and_model_representations_/1274064", "title"=>"Linear-SVM generalization performance of neural and model representations.", "pos_in_sequence"=>0, "defined_type"=>1, "published_date"=>"2014-12-18 02:42:30"}
  • {"files"=>["https://ndownloader.figshare.com/files/1846683"], "description"=>"<p>We measure the area-under-the-curve of the kernel analysis measurement as we change the number of neural sites (for neural representations), or the number of features (for model representations). Measured samples are indicated by filled symbols and measured standard deviations indicated by error bars. Multi-unit analysis is shown in panel A and single-unit analysis in B. The model representations are noise corrected by adding noise that is matched to the IT multi-unit measurements (A, as indicated by the symbol) or single-unit measurements (B, as indicated by the symbol). For the multi-unit analysis, the Zeiler & Fergus 2013 representation rivals the IT cortex representation over our measured sample. For the single-unit analysis, the Krizhevsky et al. 2012 representation rivals the IT cortex representation for low number of features and slightly surpasses it for higher number of features. The Zeiler & Fergus 2013 representation surpasses the IT cortex representation over our measured sample.</p>", "links"=>[], "tags"=>["object recognition", "object recognition performance", "dnn", "limitation", "classifier training examples", "object recognition task", "Deep Neural Networks Rival", "model"], "article_id"=>1274062, "categories"=>["Uncategorised"], "users"=>["Charles F. Cadieu", "Ha Hong", "Daniel L. K. Yamins", "Nicolas Pinto", "Diego Ardila", "Ethan A. Solomon", "Najib J. Majaj", "James J. DiCarlo"], "doi"=>"https://dx.doi.org/10.1371/journal.pcbi.1003963.g004", "stats"=>{"downloads"=>1, "page_views"=>28, "likes"=>0}, "figshare_url"=>"https://figshare.com/articles/_Effect_of_sampling_the_neural_and_noise_corrected_model_representations_/1274062", "title"=>"Effect of sampling the neural and noise-corrected model representations.", "pos_in_sequence"=>0, "defined_type"=>1, "published_date"=>"2014-12-18 02:42:30"}
  • {"files"=>["https://ndownloader.figshare.com/files/1846679"], "description"=>"<p>Two of the 1960 tested images are shown from the categories Cars, Fruits, and Animals (we also tested the categories Planes, Chairs, Tables, and Faces). Variability within each category consisted of changes to object exemplar (e.g. 7 different types of Animals), geometric transformations due to position, scale, and rotation/pose, and changes to background (each background image is unique).</p>", "links"=>[], "tags"=>["object recognition", "object recognition performance", "dnn", "limitation", "classifier training examples", "object recognition task", "Deep Neural Networks Rival", "model"], "article_id"=>1274058, "categories"=>["Uncategorised"], "users"=>["Charles F. Cadieu", "Ha Hong", "Daniel L. K. Yamins", "Nicolas Pinto", "Diego Ardila", "Ethan A. Solomon", "Najib J. Majaj", "James J. DiCarlo"], "doi"=>"https://dx.doi.org/10.1371/journal.pcbi.1003963.g001", "stats"=>{"downloads"=>0, "page_views"=>24, "likes"=>0}, "figshare_url"=>"https://figshare.com/articles/_Example_images_used_to_measure_object_category_recognition_performance_/1274058", "title"=>"Example images used to measure object category recognition performance.", "pos_in_sequence"=>0, "defined_type"=>1, "published_date"=>"2014-12-18 02:42:30"}
  • {"files"=>["https://ndownloader.figshare.com/files/1846687"], "description"=>"<p>A) Following the proposed analysis in <a href=\"http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1003963#pcbi.1003963-Kriegeskorte2\" target=\"_blank\">[32]</a>, the object-level dissimilarity matrix for the IT multi-unit representation is compared to the matrices computed from the model representations and from the V4 multi-unit representation. Each bar indicates the similarity between the corresponding representation and the IT multi-unit representation as measured by the Spearman correlation between dissimilarity matrices. Error bars indicate standard deviation over 10 splits. The IT Cortex Split-Half bar indicates the deviation measured by comparing half of the multi-unit sites to the other half, measured over 50 repetitions. The V1-like, V2-like, and HMAX representations are highly dissimilar to IT cortex. The HMO representation produces comparable deviations from IT as the V4 multi-unit representation while the Krizhevsky et al. 2012 and Zeiler & Fergus 2013 representations fall in-between the V4 representation and the IT cortex split-half measurement. The representations with an appended “+ IT-fit” follow the methodology in <a href=\"http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1003963#pcbi.1003963-Yamins1\" target=\"_blank\">[27]</a>, which first predicts IT multi-unit responses from the model representation and then uses these predictions to form a new representation (see text). B) Depictions of the object-level RDMs for select representations. Each matrix is ordered by object category (animals, cars, chairs, etc.) and scaled independently (see color bar). For the “+ IT-fit” representations, the feature for each image was averaged across testing set predictions before computing the RDM (see <a href=\"http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1003963#s4\" target=\"_blank\">Methods</a>).</p>", "links"=>[], "tags"=>["object recognition", "object recognition performance", "dnn", "limitation", "classifier training examples", "object recognition task", "Deep Neural Networks Rival", "model"], "article_id"=>1274066, "categories"=>["Uncategorised"], "users"=>["Charles F. Cadieu", "Ha Hong", "Daniel L. K. Yamins", "Nicolas Pinto", "Diego Ardila", "Ethan A. Solomon", "Najib J. Majaj", "James J. DiCarlo"], "doi"=>"https://dx.doi.org/10.1371/journal.pcbi.1003963.g007", "stats"=>{"downloads"=>9, "page_views"=>83, "likes"=>0}, "figshare_url"=>"https://figshare.com/articles/_Object_level_representational_similarity_analysis_comparing_model_and_neural_representations_to_the_IT_multi_unit_representation_/1274066", "title"=>"Object-level representational similarity analysis comparing model and neural representations to the IT multi-unit representation.", "pos_in_sequence"=>0, "defined_type"=>1, "published_date"=>"2014-12-18 02:42:30"}

PMC Usage Stats | Further Information

  • {"unique-ip"=>"1", "full-text"=>"1", "pdf"=>"1", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2014", "month"=>"12"}
  • {"unique-ip"=>"28", "full-text"=>"18", "pdf"=>"25", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2015", "month"=>"1"}
  • {"unique-ip"=>"14", "full-text"=>"16", "pdf"=>"10", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"1", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2015", "month"=>"2"}
  • {"unique-ip"=>"9", "full-text"=>"8", "pdf"=>"5", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2015", "month"=>"3"}
  • {"unique-ip"=>"10", "full-text"=>"11", "pdf"=>"0", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"1", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2015", "month"=>"4"}
  • {"unique-ip"=>"8", "full-text"=>"14", "pdf"=>"6", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"3", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2015", "month"=>"5"}
  • {"unique-ip"=>"8", "full-text"=>"7", "pdf"=>"3", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2015", "month"=>"6"}
  • {"unique-ip"=>"11", "full-text"=>"10", "pdf"=>"4", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"4", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2015", "month"=>"7"}
  • {"unique-ip"=>"12", "full-text"=>"8", "pdf"=>"7", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2015", "month"=>"8"}
  • {"unique-ip"=>"14", "full-text"=>"10", "pdf"=>"5", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2015", "month"=>"9"}
  • {"unique-ip"=>"16", "full-text"=>"16", "pdf"=>"3", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2015", "month"=>"10"}
  • {"unique-ip"=>"21", "full-text"=>"10", "pdf"=>"11", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"2", "supp-data"=>"0", "cited-by"=>"1", "year"=>"2015", "month"=>"11"}
  • {"unique-ip"=>"13", "full-text"=>"9", "pdf"=>"8", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"2", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2015", "month"=>"12"}
  • {"unique-ip"=>"14", "full-text"=>"5", "pdf"=>"3", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"11", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2016", "month"=>"1"}
  • {"unique-ip"=>"16", "full-text"=>"20", "pdf"=>"4", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2016", "month"=>"2"}
  • {"unique-ip"=>"15", "full-text"=>"19", "pdf"=>"8", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"1", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2016", "month"=>"3"}
  • {"unique-ip"=>"4", "full-text"=>"4", "pdf"=>"1", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"3", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2016", "month"=>"4"}
  • {"unique-ip"=>"5", "full-text"=>"4", "pdf"=>"3", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"2", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2016", "month"=>"5"}
  • {"unique-ip"=>"8", "full-text"=>"5", "pdf"=>"5", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2016", "month"=>"6"}
  • {"unique-ip"=>"12", "full-text"=>"7", "pdf"=>"4", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"1", "supp-data"=>"1", "cited-by"=>"0", "year"=>"2016", "month"=>"7"}
  • {"unique-ip"=>"11", "full-text"=>"13", "pdf"=>"5", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"3", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2016", "month"=>"8"}
  • {"unique-ip"=>"6", "full-text"=>"6", "pdf"=>"4", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2016", "month"=>"9"}
  • {"unique-ip"=>"10", "full-text"=>"10", "pdf"=>"2", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"3", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2016", "month"=>"10"}
  • {"unique-ip"=>"5", "full-text"=>"3", "pdf"=>"2", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2016", "month"=>"11"}
  • {"unique-ip"=>"14", "full-text"=>"14", "pdf"=>"5", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"2", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2016", "month"=>"12"}
  • {"unique-ip"=>"5", "full-text"=>"2", "pdf"=>"2", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"3", "supp-data"=>"0", "cited-by"=>"1", "year"=>"2017", "month"=>"1"}
  • {"unique-ip"=>"3", "full-text"=>"1", "pdf"=>"3", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2017", "month"=>"2"}
  • {"unique-ip"=>"15", "full-text"=>"10", "pdf"=>"10", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2017", "month"=>"3"}
  • {"unique-ip"=>"5", "full-text"=>"4", "pdf"=>"3", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2017", "month"=>"4"}
  • {"unique-ip"=>"6", "full-text"=>"8", "pdf"=>"1", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"1", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2017", "month"=>"5"}
  • {"unique-ip"=>"8", "full-text"=>"7", "pdf"=>"5", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2017", "month"=>"6"}
  • {"unique-ip"=>"5", "full-text"=>"2", "pdf"=>"3", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"3", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2017", "month"=>"7"}
  • {"unique-ip"=>"5", "full-text"=>"4", "pdf"=>"1", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2017", "month"=>"8"}
  • {"unique-ip"=>"9", "full-text"=>"6", "pdf"=>"4", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"1", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2017", "month"=>"9"}
  • {"unique-ip"=>"17", "full-text"=>"15", "pdf"=>"5", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2017", "month"=>"10"}
  • {"unique-ip"=>"13", "full-text"=>"7", "pdf"=>"11", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"2", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2017", "month"=>"11"}
  • {"unique-ip"=>"13", "full-text"=>"10", "pdf"=>"8", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2017", "month"=>"12"}
  • {"unique-ip"=>"10", "full-text"=>"10", "pdf"=>"2", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2018", "month"=>"1"}
  • {"unique-ip"=>"1", "full-text"=>"1", "pdf"=>"0", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2018", "month"=>"2"}
  • {"unique-ip"=>"12", "full-text"=>"9", "pdf"=>"4", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2018", "month"=>"3"}
  • {"unique-ip"=>"22", "full-text"=>"27", "pdf"=>"6", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"1", "cited-by"=>"0", "year"=>"2019", "month"=>"1"}
  • {"unique-ip"=>"8", "full-text"=>"8", "pdf"=>"5", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2018", "month"=>"4"}
  • {"unique-ip"=>"25", "full-text"=>"29", "pdf"=>"1", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2018", "month"=>"5"}
  • {"unique-ip"=>"13", "full-text"=>"12", "pdf"=>"5", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2018", "month"=>"6"}
  • {"unique-ip"=>"14", "full-text"=>"14", "pdf"=>"6", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"1", "supp-data"=>"1", "cited-by"=>"1", "year"=>"2018", "month"=>"7"}
  • {"unique-ip"=>"7", "full-text"=>"8", "pdf"=>"2", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2018", "month"=>"8"}
  • {"unique-ip"=>"14", "full-text"=>"15", "pdf"=>"6", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"1", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2018", "month"=>"9"}
  • {"unique-ip"=>"16", "full-text"=>"14", "pdf"=>"11", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2018", "month"=>"10"}
  • {"unique-ip"=>"11", "full-text"=>"10", "pdf"=>"3", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"6", "cited-by"=>"0", "year"=>"2018", "month"=>"11"}
  • {"unique-ip"=>"14", "full-text"=>"12", "pdf"=>"4", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2018", "month"=>"12"}
  • {"unique-ip"=>"17", "full-text"=>"18", "pdf"=>"4", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2019", "month"=>"2"}
  • {"unique-ip"=>"27", "full-text"=>"24", "pdf"=>"7", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2019", "month"=>"3"}
  • {"unique-ip"=>"18", "full-text"=>"20", "pdf"=>"3", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"1", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2019", "month"=>"4"}
  • {"unique-ip"=>"21", "full-text"=>"20", "pdf"=>"4", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"6", "supp-data"=>"4", "cited-by"=>"0", "year"=>"2019", "month"=>"5"}
  • {"unique-ip"=>"10", "full-text"=>"10", "pdf"=>"3", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2019", "month"=>"8"}
  • {"unique-ip"=>"15", "full-text"=>"26", "pdf"=>"3", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2019", "month"=>"9"}
  • {"unique-ip"=>"16", "full-text"=>"17", "pdf"=>"2", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2019", "month"=>"10"}

Relative Metric

{"start_date"=>"2014-01-01T00:00:00Z", "end_date"=>"2014-12-31T00:00:00Z", "subject_areas"=>[]}