Simple Plans or Sophisticated Habits? State, Transition and Learning Interactions in the Two-Step Task

Mendeley

{"title"=>"Simple Plans or Sophisticated Habits? State, Transition and Learning Interactions in the Two-Step Task", "type"=>"journal", "authors"=>[{"first_name"=>"Thomas", "last_name"=>"Akam", "scopus_author_id"=>"36776755500"}, {"first_name"=>"Rui", "last_name"=>"Costa", "scopus_author_id"=>"7203063633"}, {"first_name"=>"Peter", "last_name"=>"Dayan", "scopus_author_id"=>"7006914663"}], "year"=>2015, "source"=>"PLoS Computational Biology", "identifiers"=>{"issn"=>"15537358", "doi"=>"10.1371/journal.pcbi.1004648", "sgr"=>"84953233970", "scopus"=>"2-s2.0-84953233970", "isbn"=>"10.1371/journal.pcbi.1004648", "pmid"=>"26657806", "pui"=>"607623934"}, "id"=>"b3a80210-4cfd-32f3-8ce1-db19d1e19779", "abstract"=>"Author Summary Planning is the use of a predictive model of the consequences of actions to guide decision making. Planning plays a critical role in human behaviour, but isolating its contribution is challenging because it is complemented by control systems which learn values of actions directly from the history of reinforcement, resulting in automatized mappings from states to actions often termed habits. Our study examined a recently developed behavioural task which uses choices in a multi-step decision tree to differentiate planning from value-based control. We compared various strategies using simulations, showing a range that produce behaviour that resembles planning but in fact arises as a fixed mapping from particular sorts of states to action. These results show that when a planning problem is faced repeatedly, sophisticated automatization strategies may be developed which identify that there are in fact a limited number of relevant states of the world each with an appropriate fixed or habitual response. Understanding such strategies is important for the design and interpretation of tasks which aim to isolate the contribution of planning to behaviour. 
Such strategies are also of independent scientific interest as they may contribute to automatization of behaviour in complex environments.", "link"=>"http://www.mendeley.com/research/simple-plans-sophisticated-habits-state-transition-learning-interactions-twostep-task", "reader_count"=>118, "reader_count_by_academic_status"=>{"Unspecified"=>4, "Professor > Associate Professor"=>2, "Student > Doctoral Student"=>12, "Researcher"=>21, "Student > Ph. D. Student"=>31, "Student > Postgraduate"=>7, "Student > Master"=>19, "Other"=>3, "Student > Bachelor"=>11, "Lecturer"=>1, "Professor"=>7}, "reader_count_by_user_role"=>{"Unspecified"=>4, "Professor > Associate Professor"=>2, "Student > Doctoral Student"=>12, "Researcher"=>21, "Student > Ph. D. Student"=>31, "Student > Postgraduate"=>7, "Student > Master"=>19, "Other"=>3, "Student > Bachelor"=>11, "Lecturer"=>1, "Professor"=>7}, "reader_count_by_subject_area"=>{"Engineering"=>2, "Unspecified"=>9, "Biochemistry, Genetics and Molecular Biology"=>1, "Mathematics"=>1, "Agricultural and Biological Sciences"=>21, "Medicine and Dentistry"=>9, "Neuroscience"=>29, "Physics and Astronomy"=>1, "Psychology"=>33, "Social Sciences"=>1, "Computer Science"=>10, "Decision Sciences"=>1}, "reader_count_by_subdiscipline"=>{"Engineering"=>{"Engineering"=>2}, "Medicine and Dentistry"=>{"Medicine and Dentistry"=>9}, "Neuroscience"=>{"Neuroscience"=>29}, "Social Sciences"=>{"Social Sciences"=>1}, "Decision Sciences"=>{"Decision Sciences"=>1}, "Physics and Astronomy"=>{"Physics and Astronomy"=>1}, "Psychology"=>{"Psychology"=>33}, "Agricultural and Biological Sciences"=>{"Agricultural and Biological Sciences"=>21}, "Computer Science"=>{"Computer Science"=>10}, "Biochemistry, Genetics and Molecular Biology"=>{"Biochemistry, Genetics and Molecular Biology"=>1}, "Mathematics"=>{"Mathematics"=>1}, "Unspecified"=>{"Unspecified"=>9}}, "reader_count_by_country"=>{"United States"=>1, "Japan"=>1, "Denmark"=>1, "United Kingdom"=>3, "Germany"=>4}, 
"group_count"=>5}

Scopus

{"@_fa"=>"true", "link"=>[{"@_fa"=>"true", "@ref"=>"self", "@href"=>"https://api.elsevier.com/content/abstract/scopus_id/84953233970"}, {"@_fa"=>"true", "@ref"=>"author-affiliation", "@href"=>"https://api.elsevier.com/content/abstract/scopus_id/84953233970?field=author,affiliation"}, {"@_fa"=>"true", "@ref"=>"scopus", "@href"=>"https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84953233970&origin=inward"}, {"@_fa"=>"true", "@ref"=>"scopus-citedby", "@href"=>"https://www.scopus.com/inward/citedby.uri?partnerID=HzOxMe3b&scp=84953233970&origin=inward"}], "prism:url"=>"https://api.elsevier.com/content/abstract/scopus_id/84953233970", "dc:identifier"=>"SCOPUS_ID:84953233970", "eid"=>"2-s2.0-84953233970", "dc:title"=>"Simple Plans or Sophisticated Habits? State, Transition and Learning Interactions in the Two-Step Task", "dc:creator"=>"Akam T.", "prism:publicationName"=>"PLoS Computational Biology", "prism:issn"=>"1553734X", "prism:eIssn"=>"15537358", "prism:volume"=>"11", "prism:issueIdentifier"=>"12", "prism:pageRange"=>nil, "prism:coverDate"=>"2015-01-01", "prism:coverDisplayDate"=>"2015", "prism:doi"=>"10.1371/journal.pcbi.1004648", "citedby-count"=>"18", "affiliation"=>[{"@_fa"=>"true", "affilname"=>"Champalimaud Centre for the Unknown", "affiliation-city"=>"Lisbon", "affiliation-country"=>"Portugal"}, {"@_fa"=>"true", "affilname"=>"University of Oxford Medical Sciences Division", "affiliation-city"=>"Oxford", "affiliation-country"=>"United Kingdom"}], "pubmed-id"=>"26657806", "prism:aggregationType"=>"Journal", "subtype"=>"ar", "subtypeDescription"=>"Article", "article-number"=>"e1004648", "source-id"=>"4000151810", "openaccess"=>"1", "openaccessFlag"=>true}

Facebook

  • {"url"=>"http%3A%2F%2Fjournals.plos.org%2Fploscompbiol%2Farticle%3Fid%3D10.1371%252Fjournal.pcbi.1004648", "share_count"=>0, "like_count"=>0, "comment_count"=>0, "click_count"=>0, "total_count"=>0}

Counter

  • {"month"=>"12", "year"=>"2015", "pdf_views"=>"46", "xml_views"=>"3", "html_views"=>"908"}
  • {"month"=>"1", "year"=>"2016", "pdf_views"=>"55", "xml_views"=>"1", "html_views"=>"360"}
  • {"month"=>"2", "year"=>"2016", "pdf_views"=>"30", "xml_views"=>"0", "html_views"=>"259"}
  • {"month"=>"3", "year"=>"2016", "pdf_views"=>"20", "xml_views"=>"0", "html_views"=>"227"}
  • {"month"=>"4", "year"=>"2016", "pdf_views"=>"55", "xml_views"=>"0", "html_views"=>"201"}
  • {"month"=>"5", "year"=>"2016", "pdf_views"=>"40", "xml_views"=>"0", "html_views"=>"142"}
  • {"month"=>"6", "year"=>"2016", "pdf_views"=>"22", "xml_views"=>"0", "html_views"=>"92"}
  • {"month"=>"7", "year"=>"2016", "pdf_views"=>"15", "xml_views"=>"0", "html_views"=>"95"}
  • {"month"=>"8", "year"=>"2016", "pdf_views"=>"9", "xml_views"=>"0", "html_views"=>"65"}
  • {"month"=>"9", "year"=>"2016", "pdf_views"=>"16", "xml_views"=>"0", "html_views"=>"63"}
  • {"month"=>"10", "year"=>"2016", "pdf_views"=>"25", "xml_views"=>"0", "html_views"=>"94"}
  • {"month"=>"11", "year"=>"2016", "pdf_views"=>"18", "xml_views"=>"1", "html_views"=>"96"}
  • {"month"=>"12", "year"=>"2016", "pdf_views"=>"25", "xml_views"=>"0", "html_views"=>"96"}
  • {"month"=>"1", "year"=>"2017", "pdf_views"=>"20", "xml_views"=>"0", "html_views"=>"94"}
  • {"month"=>"2", "year"=>"2017", "pdf_views"=>"31", "xml_views"=>"2", "html_views"=>"80"}
  • {"month"=>"3", "year"=>"2017", "pdf_views"=>"38", "xml_views"=>"0", "html_views"=>"109"}
  • {"month"=>"4", "year"=>"2017", "pdf_views"=>"32", "xml_views"=>"0", "html_views"=>"85"}
  • {"month"=>"5", "year"=>"2017", "pdf_views"=>"23", "xml_views"=>"2", "html_views"=>"73"}
  • {"month"=>"6", "year"=>"2017", "pdf_views"=>"14", "xml_views"=>"0", "html_views"=>"71"}
  • {"month"=>"7", "year"=>"2017", "pdf_views"=>"18", "xml_views"=>"0", "html_views"=>"64"}
  • {"month"=>"8", "year"=>"2017", "pdf_views"=>"12", "xml_views"=>"1", "html_views"=>"53"}
  • {"month"=>"9", "year"=>"2017", "pdf_views"=>"22", "xml_views"=>"1", "html_views"=>"64"}
  • {"month"=>"10", "year"=>"2017", "pdf_views"=>"18", "xml_views"=>"1", "html_views"=>"69"}
  • {"month"=>"11", "year"=>"2017", "pdf_views"=>"32", "xml_views"=>"0", "html_views"=>"95"}
  • {"month"=>"12", "year"=>"2017", "pdf_views"=>"15", "xml_views"=>"0", "html_views"=>"51"}
  • {"month"=>"1", "year"=>"2018", "pdf_views"=>"15", "xml_views"=>"0", "html_views"=>"47"}
  • {"month"=>"2", "year"=>"2018", "pdf_views"=>"16", "xml_views"=>"0", "html_views"=>"52"}
  • {"month"=>"3", "year"=>"2018", "pdf_views"=>"19", "xml_views"=>"0", "html_views"=>"53"}
  • {"month"=>"4", "year"=>"2018", "pdf_views"=>"22", "xml_views"=>"0", "html_views"=>"48"}
  • {"month"=>"5", "year"=>"2018", "pdf_views"=>"22", "xml_views"=>"0", "html_views"=>"67"}
  • {"month"=>"6", "year"=>"2018", "pdf_views"=>"28", "xml_views"=>"0", "html_views"=>"42"}
  • {"month"=>"7", "year"=>"2018", "pdf_views"=>"32", "xml_views"=>"3", "html_views"=>"49"}
  • {"month"=>"8", "year"=>"2018", "pdf_views"=>"15", "xml_views"=>"2", "html_views"=>"43"}
  • {"month"=>"9", "year"=>"2018", "pdf_views"=>"10", "xml_views"=>"1", "html_views"=>"62"}
  • {"month"=>"10", "year"=>"2018", "pdf_views"=>"21", "xml_views"=>"1", "html_views"=>"52"}
  • {"month"=>"11", "year"=>"2018", "pdf_views"=>"27", "xml_views"=>"0", "html_views"=>"42"}
  • {"month"=>"12", "year"=>"2018", "pdf_views"=>"33", "xml_views"=>"0", "html_views"=>"72"}
  • {"month"=>"1", "year"=>"2019", "pdf_views"=>"17", "xml_views"=>"0", "html_views"=>"44"}
  • {"month"=>"2", "year"=>"2019", "pdf_views"=>"28", "xml_views"=>"1", "html_views"=>"68"}
  • {"month"=>"3", "year"=>"2019", "pdf_views"=>"25", "xml_views"=>"5", "html_views"=>"76"}
  • {"month"=>"4", "year"=>"2019", "pdf_views"=>"19", "xml_views"=>"1", "html_views"=>"104"}
  • {"month"=>"5", "year"=>"2019", "pdf_views"=>"20", "xml_views"=>"0", "html_views"=>"74"}
  • {"month"=>"6", "year"=>"2019", "pdf_views"=>"19", "xml_views"=>"0", "html_views"=>"48"}

Figshare

  • {"files"=>["https://ndownloader.figshare.com/files/2607419"], "description"=>"<p>(<b>A</b>) Predictor loadings for logistic regression model predicting whether the <i>Q</i>(1) agent will repeat the same choice as a function of 4 predictors; Stay–a tendency to repeat the same choice irrespective of trial events, Outcome–a tendency to repeat the same choice following a rewarded trial, Transition—a tendency to repeat the same choice following common transitions, Transition x outcome interaction–a tendency to repeat the same choice dependent on the interaction between transition (common/rare) and outcome (rewarded/not). (<b>B</b>) Action values at the start of the trial for the chosen and not chosen action shown separately for trials with different transitions (common or rare) and outcomes (rewarded or not). Yellow error bars show SEM across sessions. (<b>C</b>) Predictor loadings for logistic regression model with additional predictor capturing tendency to repeat correct choices, i.e. choices whose common transition lead to the state which currently has high reward probability. 
(<b>D</b>) Across trial correlation between predictors in logistic regression analysis shown in (<b>C</b>).</p>", "links"=>[], "tags"=>["Behavioural Performance", "behavioural neuroscience", "decision variables", "Sophisticated Habits", "strategy", "Learning Interactions", "action values", "analysis", "task structure", "reinforcement", "contingencies optimally", "trial events", "correlation", "Simple Plans"], "article_id"=>1623872, "categories"=>["Biological Sciences", "Science Policy"], "users"=>["Thomas Akam", "Rui Costa", "Peter Dayan"], "doi"=>"https://dx.doi.org/10.1371/journal.pcbi.1004648.g002", "stats"=>{"downloads"=>0, "page_views"=>0, "likes"=>0}, "figshare_url"=>"https://figshare.com/articles/_Stay_probability_transition_outcome_interaction_for_Q_1_agent_due_to_trial_start_action_values_/1623872", "title"=>"Stay probability transition-outcome interaction for <i>Q</i>(1) agent due to trial start action values.", "pos_in_sequence"=>0, "defined_type"=>1, "published_date"=>"2015-12-17 08:09:02"}
  • {"files"=>["https://ndownloader.figshare.com/files/2607420"], "description"=>"<p>Comparison of the behaviour of all agents types discussed in the paper on the reduced task. Far left panels–Stay probability plots. Centre left panels—Predictor loadings for logistic regression model predicting whether the agent will repeat the same choice as a function of 4 predictors; Stay–a tendency to repeat the same choice irrespective of trial events, Outcome–a tendency to repeat the same choice following a rewarded trial, Transition—a tendency to repeat the same choice following common transitions, Transition x outcome interaction–a tendency to repeat the same choice dependent on the interaction between transition (common/rare) and outcome (rewarded/not). Centre right panels–Predictor loadings for logistic regression analysis with additional ‘correct’ predictor which captures a tendency to repeat correct choices. Right panels—Predictor loadings for lagged logistic regression model. The model uses a set of 4 predictors at each lag, each of which captures how a given combination of transition (common/rare) and outcome (rewarded/not) predicts whether the agent will repeat the choice a given number of trials in the future, e.g, the ‘rewarded, rare’ predictor at lag -2 captures the extent to which receiving a reward following a rare transition predicts that the agent will choose the same action two trials later. Legend for right panels is at bottom of figure. Error bars in all plots show SEM across sessions. 
Agent types: (<b>A-D</b>) <i>Q</i>(1), (<b>E-H</b>) Model-based, (<b>I-L</b>) <i>Q</i>(0), (<b>M-P</b>) Reward-as-cue, (<b>Q-T</b>) Latent-state.</p>", "links"=>[], "tags"=>["Behavioural Performance", "behavioural neuroscience", "decision variables", "Sophisticated Habits", "strategy", "Learning Interactions", "action values", "analysis", "task structure", "reinforcement", "contingencies optimally", "trial events", "correlation", "Simple Plans"], "article_id"=>1623873, "categories"=>["Biological Sciences", "Science Policy"], "users"=>["Thomas Akam", "Rui Costa", "Peter Dayan"], "doi"=>"https://dx.doi.org/10.1371/journal.pcbi.1004648.g003", "stats"=>{"downloads"=>0, "page_views"=>0, "likes"=>0}, "figshare_url"=>"https://figshare.com/articles/_Comparison_of_agents_8217_behaviour_8211_reduced_task_/1623873", "title"=>"Comparison of agents’ behaviour–reduced task.", "pos_in_sequence"=>0, "defined_type"=>1, "published_date"=>"2015-12-17 08:09:02"}
  • {"files"=>["https://ndownloader.figshare.com/files/2607421"], "description"=>"<p>Performance achieved by different agent types in the original (<b>A</b>) and reduced (<b>B</b>) tasks, with parameter values optimised to maximise the fraction of trials rewarded. For the reward as cue agent, performance is shown for a fixed strategy of choosing action A (B) following reward in state <i>a</i> (<i>b</i>) and action B (A) following non-reward in state <i>a</i> (<i>b</i>). SEM error bars shown in red. Significant differences indicated by *: 5 < 0.05, ** P < 10<sup>−5</sup>.</p>", "links"=>[], "tags"=>["Behavioural Performance", "behavioural neuroscience", "decision variables", "Sophisticated Habits", "strategy", "Learning Interactions", "action values", "analysis", "task structure", "reinforcement", "contingencies optimally", "trial events", "correlation", "Simple Plans"], "article_id"=>1623874, "categories"=>["Biological Sciences", "Science Policy"], "users"=>["Thomas Akam", "Rui Costa", "Peter Dayan"], "doi"=>"https://dx.doi.org/10.1371/journal.pcbi.1004648.g004", "stats"=>{"downloads"=>1, "page_views"=>0, "likes"=>0}, "figshare_url"=>"https://figshare.com/articles/_Comparison_of_agents_8217_performance_/1623874", "title"=>"Comparison of agents’ performance.", "pos_in_sequence"=>0, "defined_type"=>1, "published_date"=>"2015-12-17 08:09:02"}
  • {"files"=>["https://ndownloader.figshare.com/files/2607422"], "description"=>"<p>Data likelihood for maximum likelihood fits of different agent types (indicated by x-axis labels; MB–Model based, RC–Reward-as-cue, LS–Latent-state) to data simulated from each agent type (indicted by labels above axes) on the reduced (<b>A-E</b>) and original (<b>F-J</b>) tasks. All differences in data likelihood between different agents fit to the same data are significant at P < 10<sup>−4</sup> except for that between the fit of the reward-as-cue and latent-state agents to data simulated from the reward-as-cue agent which is significant at P = 0.027.</p>", "links"=>[], "tags"=>["Behavioural Performance", "behavioural neuroscience", "decision variables", "Sophisticated Habits", "strategy", "Learning Interactions", "action values", "analysis", "task structure", "reinforcement", "contingencies optimally", "trial events", "correlation", "Simple Plans"], "article_id"=>1623875, "categories"=>["Biological Sciences", "Science Policy"], "users"=>["Thomas Akam", "Rui Costa", "Peter Dayan"], "doi"=>"https://dx.doi.org/10.1371/journal.pcbi.1004648.g005", "stats"=>{"downloads"=>0, "page_views"=>0, "likes"=>0}, "figshare_url"=>"https://figshare.com/articles/_Likelihood_comparison_/1623875", "title"=>"Likelihood comparison.", "pos_in_sequence"=>0, "defined_type"=>1, "published_date"=>"2015-12-17 08:09:03"}
  • {"files"=>["https://ndownloader.figshare.com/files/2607423", "https://ndownloader.figshare.com/files/2607424", "https://ndownloader.figshare.com/files/2607425", "https://ndownloader.figshare.com/files/2607426", "https://ndownloader.figshare.com/files/2607427", "https://ndownloader.figshare.com/files/2607428", "https://ndownloader.figshare.com/files/2607429", "https://ndownloader.figshare.com/files/2607430"], "description"=>"<div><p>The recently developed ‘two-step’ behavioural task promises to differentiate model-based from model-free reinforcement learning, while generating neurophysiologically-friendly decision datasets with parametric variation of decision variables. These desirable features have prompted its widespread adoption. Here, we analyse the interactions between a range of different strategies and the structure of transitions and outcomes in order to examine constraints on what can be learned from behavioural performance. The task involves a trade-off between the need for stochasticity, to allow strategies to be discriminated, and a need for determinism, so that it is worth subjects’ investment of effort to exploit the contingencies optimally. We show through simulation that under certain conditions model-free strategies can masquerade as being model-based. We first show that seemingly innocuous modifications to the task structure can induce correlations between action values at the start of the trial and the subsequent trial events in such a way that analysis based on comparing successive trials can lead to erroneous conclusions. We confirm the power of a suggested correction to the analysis that can alleviate this problem. We then consider model-free reinforcement learning strategies that exploit correlations between where rewards are obtained and which actions have high expected value. These generate behaviour that appears model-based under these, and also more sophisticated, analyses. 
Exploiting the full potential of the two-step task as a tool for behavioural neuroscience requires an understanding of these issues.</p></div>", "links"=>[], "tags"=>["Behavioural Performance", "behavioural neuroscience", "decision variables", "Sophisticated Habits", "strategy", "Learning Interactions", "action values", "analysis", "task structure", "reinforcement", "contingencies optimally", "trial events", "correlation", "Simple Plans"], "article_id"=>1623876, "categories"=>["Biological Sciences", "Science Policy"], "users"=>["Thomas Akam", "Rui Costa", "Peter Dayan"], "doi"=>["https://dx.doi.org/10.1371/journal.pcbi.1004648.s001", "https://dx.doi.org/10.1371/journal.pcbi.1004648.s002", "https://dx.doi.org/10.1371/journal.pcbi.1004648.s003", "https://dx.doi.org/10.1371/journal.pcbi.1004648.s004", "https://dx.doi.org/10.1371/journal.pcbi.1004648.s005", "https://dx.doi.org/10.1371/journal.pcbi.1004648.s006", "https://dx.doi.org/10.1371/journal.pcbi.1004648.s007", "https://dx.doi.org/10.1371/journal.pcbi.1004648.s008"], "stats"=>{"downloads"=>10, "page_views"=>0, "likes"=>0}, "figshare_url"=>"https://figshare.com/articles/_Simple_Plans_or_Sophisticated_Habits_State_Transition_and_Learning_Interactions_in_the_Two_Step_Task_/1623876", "title"=>"Simple Plans or Sophisticated Habits? State, Transition and Learning Interactions in the Two-Step Task", "pos_in_sequence"=>0, "defined_type"=>4, "published_date"=>"2015-12-17 08:09:05"}
  • {"files"=>["https://ndownloader.figshare.com/files/2607418"], "description"=>"<p>(<b>A, B</b>) Diagram of task structure for original (<b>A</b>) and reduced (<b>B</b>) two step tasks. (<b>C</b>, <b>D</b>) Example reward probability trajectories for the second-step actions in each task. (<b>E—H</b>) Stay probability plots for <i>Q</i>(1) (<b>E</b>,<b>G</b>) and model-based (<b>F, H</b>) agents on the two task versions. Plots show the fraction of trials on which the agent repeated its choice following rewarded and non-rewarded trials with common and rare transitions (SEM error bars shown in red). (<b>I, J</b>) Performance (fraction of trials rewarded) achieved by <i>Q</i>(1) and model based agents, and by an agent which chooses randomly at the first step. Agent parameters in (<b>I</b>,<b>J</b>) have been optimised to maximise the fraction of rewarded trials.</p>", "links"=>[], "tags"=>["Behavioural Performance", "behavioural neuroscience", "decision variables", "Sophisticated Habits", "strategy", "Learning Interactions", "action values", "analysis", "task structure", "reinforcement", "contingencies optimally", "trial events", "correlation", "Simple Plans"], "article_id"=>1623871, "categories"=>["Biological Sciences", "Science Policy"], "users"=>["Thomas Akam", "Rui Costa", "Peter Dayan"], "doi"=>"https://dx.doi.org/10.1371/journal.pcbi.1004648.g001", "stats"=>{"downloads"=>0, "page_views"=>0, "likes"=>0}, "figshare_url"=>"https://figshare.com/articles/_Original_and_reduced_versions_of_the_two_step_task_/1623871", "title"=>"Original and reduced versions of the two-step task.", "pos_in_sequence"=>0, "defined_type"=>1, "published_date"=>"2015-12-17 08:09:08"}

PMC Usage Stats

  • {"unique-ip"=>"20", "full-text"=>"13", "pdf"=>"9", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"5", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2016", "month"=>"1"}
  • {"unique-ip"=>"12", "full-text"=>"11", "pdf"=>"10", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"1", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2016", "month"=>"2"}
  • {"unique-ip"=>"13", "full-text"=>"13", "pdf"=>"8", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"2", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2016", "month"=>"3"}
  • {"unique-ip"=>"14", "full-text"=>"11", "pdf"=>"4", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"5", "supp-data"=>"1", "cited-by"=>"0", "year"=>"2016", "month"=>"4"}
  • {"unique-ip"=>"20", "full-text"=>"20", "pdf"=>"5", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"3", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2016", "month"=>"5"}
  • {"unique-ip"=>"14", "full-text"=>"15", "pdf"=>"3", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"2", "supp-data"=>"1", "cited-by"=>"0", "year"=>"2016", "month"=>"6"}
  • {"unique-ip"=>"9", "full-text"=>"8", "pdf"=>"2", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"5", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2016", "month"=>"7"}
  • {"unique-ip"=>"5", "full-text"=>"9", "pdf"=>"0", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"2", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2016", "month"=>"8"}
  • {"unique-ip"=>"13", "full-text"=>"12", "pdf"=>"0", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"2", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2016", "month"=>"9"}
  • {"unique-ip"=>"11", "full-text"=>"9", "pdf"=>"4", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2016", "month"=>"10"}
  • {"unique-ip"=>"6", "full-text"=>"4", "pdf"=>"1", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"2", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2016", "month"=>"11"}
  • {"unique-ip"=>"8", "full-text"=>"8", "pdf"=>"0", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"3", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2016", "month"=>"12"}
  • {"unique-ip"=>"7", "full-text"=>"6", "pdf"=>"0", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"1", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2017", "month"=>"1"}
  • {"unique-ip"=>"6", "full-text"=>"4", "pdf"=>"6", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"1", "supp-data"=>"0", "cited-by"=>"1", "year"=>"2017", "month"=>"2"}
  • {"unique-ip"=>"3", "full-text"=>"3", "pdf"=>"0", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"2", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2017", "month"=>"3"}
  • {"unique-ip"=>"15", "full-text"=>"17", "pdf"=>"3", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"3", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2017", "month"=>"4"}
  • {"unique-ip"=>"7", "full-text"=>"5", "pdf"=>"3", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"3", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2017", "month"=>"5"}
  • {"unique-ip"=>"4", "full-text"=>"4", "pdf"=>"1", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"2", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2017", "month"=>"6"}
  • {"unique-ip"=>"6", "full-text"=>"6", "pdf"=>"1", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"3", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2017", "month"=>"7"}
  • {"unique-ip"=>"4", "full-text"=>"3", "pdf"=>"1", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2017", "month"=>"8"}
  • {"unique-ip"=>"4", "full-text"=>"3", "pdf"=>"3", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2017", "month"=>"9"}
  • {"unique-ip"=>"13", "full-text"=>"7", "pdf"=>"2", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"2", "supp-data"=>"1", "cited-by"=>"3", "year"=>"2017", "month"=>"10"}
  • {"unique-ip"=>"2", "full-text"=>"1", "pdf"=>"0", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"1", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2017", "month"=>"11"}
  • {"unique-ip"=>"8", "full-text"=>"7", "pdf"=>"1", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2017", "month"=>"12"}
  • {"unique-ip"=>"2", "full-text"=>"2", "pdf"=>"2", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2018", "month"=>"1"}
  • {"unique-ip"=>"7", "full-text"=>"0", "pdf"=>"9", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2018", "month"=>"2"}
  • {"unique-ip"=>"5", "full-text"=>"4", "pdf"=>"2", "abstract"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2018", "month"=>"3"}
  • {"unique-ip"=>"8", "full-text"=>"8", "pdf"=>"1", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2019", "month"=>"1"}
  • {"unique-ip"=>"5", "full-text"=>"5", "pdf"=>"1", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2018", "month"=>"5"}
  • {"unique-ip"=>"4", "full-text"=>"3", "pdf"=>"1", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2018", "month"=>"4"}
  • {"unique-ip"=>"5", "full-text"=>"3", "pdf"=>"2", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2018", "month"=>"6"}
  • {"unique-ip"=>"14", "full-text"=>"6", "pdf"=>"1", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"9", "cited-by"=>"0", "year"=>"2018", "month"=>"7"}
  • {"unique-ip"=>"6", "full-text"=>"5", "pdf"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"1", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2018", "month"=>"8"}
  • {"unique-ip"=>"9", "full-text"=>"11", "pdf"=>"2", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2018", "month"=>"9"}
  • {"unique-ip"=>"15", "full-text"=>"13", "pdf"=>"2", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"2", "cited-by"=>"0", "year"=>"2018", "month"=>"10"}
  • {"unique-ip"=>"10", "full-text"=>"9", "pdf"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"3", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2018", "month"=>"11"}
  • {"unique-ip"=>"5", "full-text"=>"4", "pdf"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"1", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2018", "month"=>"12"}
  • {"unique-ip"=>"5", "full-text"=>"5", "pdf"=>"0", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"2", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2019", "month"=>"2"}
  • {"unique-ip"=>"2", "full-text"=>"2", "pdf"=>"1", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2019", "month"=>"3"}
  • {"unique-ip"=>"7", "full-text"=>"5", "pdf"=>"2", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"6", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2019", "month"=>"4"}
  • {"unique-ip"=>"13", "full-text"=>"13", "pdf"=>"1", "scanned-summary"=>"0", "scanned-page-browse"=>"0", "figure"=>"0", "supp-data"=>"0", "cited-by"=>"0", "year"=>"2019", "month"=>"5"}

Relative Metric

{"start_date"=>"2015-01-01T00:00:00Z", "end_date"=>"2015-12-31T00:00:00Z", "subject_areas"=>[]}