RoMEO and JULIET Updates for December 2012

RoMEO

Added new publishers:

  • American Association of Zoo Veterinarians [14/12/12] – White
  • Hirzel Verlag (S. Hirzel Verlag) [17/12/12] – Green
  • Instituto Federal de Educação, Ciência e Tecnologia do Rio de Janeiro [13/12/12]
  • NTNU, Vitenskapsmuseet (Norwegian University of Science and Technology, Museum of Natural History and Archaeology) [13/12/12] – Green
  • Social Sciences Directory [12/12/12] – Green

Total publishers: 1184 [17/12/12]; 3 provisional [23/4/12]

 

Updated entries

Jane updated the following publisher entries:

  • American Society of Agronomy – wording of link to full-text, plus policy URL [11/12/12]
  • American Statistical Association – new policy URL [14/12/12]
  • Cancer Intelligence – Commercial Publisher to Independent Journal, and new policy URL [5/12/12]
  • Crop Science Society of America – wording of link to full-text, plus policy URL [11/12/12]
  • Ecological Society of America – PDF allowed [5/12/12]
  • Royal Society of Victoria – spelling correction to title [13/12/12]
  • Septentrio Academic Publishing – policy updated to reflect exceptions [11/12/12]
  • Soil Science Society of America – wording of link to full-text [11/12/12]
  • Tapir Akademisk Forlag (Tapir Academic Press) renamed Akademika forlag, plus URL update [13/12/12]
  • Thomas Telford – White to Blue [11/12/12]
  • University of Tromso renamed Septentrio Academic Publishing [11/12/12]

 

Journal Exceptions

  • Septentrio Academic Publishing
    • Nordlit [11/12/12]
    • Nordisk Tidsskrift for Helseforskning [11/12/12]
  • SAGE
    • SAGE Open Attribution [13/12/12]
    • SAGE Open Attribution Non-Commercial [13/12/12]

 

JULIET

Overall Total: 110 [4/12/12]

Publications: 80, Data: 32, OA Journals: 47, No Policy: 25, Retired: 2

 

Added:

  • Danish National Research Foundation [4/12/12]
  • Frie Forskningsråd [4/12/12]
  • Højteknologifonden [4/12/12]
  • Rådet for Teknologi og Innovation [4/12/12]
  • Strategiske Forskningsråd [4/12/12]

 

A huge thank you to Jeffrey Beall

Jeffrey Beall is the author of the important and useful list of predatory open access publishers, available on his Scholarly Open Access blog. Jeffrey reports that there is a concerted effort to discredit him and his work.

To me this just highlights the importance of his work and the complete lack of ethics of those behind this effort. I applaud Jeffrey’s service to the open access community, both through his list and through sharing this experience. Thanks, Jeffrey – please keep up the good work!

Houghton Report on OA Cost/Benefits in Germany


General cost analysis for scholarly communication in Germany: results of the ‘Houghton Report’ for Germany by John W. Houghton, Berndt Dugall, Steffen Bernius, Julia Krönung, Wolfgang König
Management Summary: Conducted within the project “Economic Implications of New Models for Information Supply for Science and Research in Germany”, the Houghton Report for Germany provides a general cost and benefit analysis for scientific communication in Germany, comparing different scenarios according to their specific costs and explicitly including the German National License Program (NLP).
Based on the scholarly lifecycle process model outlined by Björk (2007), the study compared the following scenarios according to their accounted costs:
– Traditional subscription publishing,
– Open access publishing (Gold Open Access; refers primarily to journal publishing where access is free of charge to readers, while the authors or funding organisations pay for publication),
– Open Access self-archiving (authors deposit their work in online open access institutional or subject-based repositories, making it freely available to anyone with Internet access; further divided into (i) “Green Open Access” self-archiving operating in parallel with subscription publishing, and (ii) the “overlay services” model in which self-archiving provides the foundation for overlay services (e.g. peer review, branding and quality control services)),
– the NLP.
Within all scenarios, five core activity elements (fund research and research communication; perform research and communicate the results; publish scientific and scholarly works; facilitate dissemination, retrieval and preservation; study publications and apply the knowledge) were modelled and priced with all their included activities.
Modelling the impacts of an increase in accessibility and efficiency resulting from more open access on returns to R&D over a 20-year period and then comparing costs and benefits, we find that the benefits of open access publishing models are likely to substantially outweigh the costs and, while smaller, the benefits of the German NLP also exceed the costs.
This analysis of the potential benefits of more open access to research findings suggests that different publishing models can make a material difference to the benefits realised, as well as the costs faced. It seems likely that more Open Access would have substantial net benefits in the longer term and, while net benefits may be lower during a transitional period, they are likely to be positive for both “author-pays” Open Access publishing and the “overlay journals” alternatives (“Gold Open Access”), and for parallel subscription publishing and self-archiving (“Green Open Access”). The NLP returns substantial benefits and savings at a modest cost, returning one of the highest benefit/cost ratios available from unilateral national policies during a transitional period (second to that of “Green Open Access” self-archiving). Whether “Green Open Access” self-archiving in parallel with subscriptions is a sustainable model over the longer term is debatable, and what impact the NLP may have on the take-up of Open Access alternatives is also an important consideration. So too is the potential for developments in Open Access or other scholarly publishing business models to significantly change the relative cost-benefit of the NLP over time.
The results are comparable to those of previous studies from the UK and the Netherlands. Green Open Access in parallel with the traditional model yields the best benefits/cost ratio. Beside its benefits/cost ratio, the meaningfulness of the NLP is given by its enforceability. The true cost of toll access publishing (beside the “buyback” of information) is the prohibition of access to research and knowledge for society.

Some Comments:

Like previous Houghton Reports, this one has carefully compared unilateral and global cost/benefits for Gold Open Access Publishing and Green Open Access Self-Archiving. In this case, the options also included the German National License Program (NLP), a negotiated national site license providing German researchers with access to most of the journals they need.

As it found in other countries, the Report finds that Green OA self-archiving provides the best benefit/cost ratio in Germany too.
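(A note on the headline metric: a benefit/cost ratio in studies of this kind is simply the ratio of the discounted benefit and cost streams over the modelling horizon. As a generic sketch – this is the standard form of such a ratio, not the report’s specific model:

\[
\frac{B}{C} \;=\; \frac{\sum_{t=1}^{20} b_t\,(1+r)^{-t}}{\sum_{t=1}^{20} c_t\,(1+r)^{-t}}
\]

where b_t and c_t are the benefits and costs accruing in year t of the 20-year period and r is the discount rate. A scenario pays its way when B/C > 1, and Green OA self-archiving is the scenario for which the report finds this ratio highest.)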

It needs to be noted, however, that among the scenarios compared, only subscription publishing (including licensed subscriptions) and Gold OA publishing are publishing models. Green OA self-archiving is not a substitute publishing model but a system of providing OA under the subscription/licensing model — by supplementing it with author self-archiving (and with self-archiving mandates adopted by authors’ institutions and funders).

“Open Access self-archiving [is] further divided into (i) ‘Green Open Access’ self-archiving operating in parallel with subscription publishing; and (ii) the ‘overlay services’ model in which self-archiving provides the foundation for overlay services (e.g. peer review, branding and quality control services)”

Strictly speaking, the “overlay services model” is just another hypothetical Gold OA publishing model, but one in which the Gold OA fee pays only for the service of peer review, branding and quality control, rather than for all the rest of the products and services (print edition, online edition, access-provision, hosting, archiving) that are currently still co-bundled in journal subscriptions, and their costs.

This hypothetical Gold OA model is predicated, however, on the assumption that there is universal Green OA self-archiving too, in order to perform the access-provision, hosting and archiving functions of what was formerly co-bundled under the subscription model.

Hence for existing journals the “overlay” Gold OA model is really just the second stage of a 2-stage transition that begins with the Green OA self-archiving access-provision system. In such a transition scenario, although Green OA would begin as a supplement to the subscription model, it would become an essential contributor to the sustainability of the overlay Gold OA model.

“comparing costs and benefits… [of] open access on returns to R&D over a 20 year period… we find that the benefits of open access publishing models are likely to substantially outweigh the costs and, while smaller, the benefits of the German NLP also exceed the costs.”

Again, it needs to be kept in mind that what are being compared are not just independent alternative publishing models, but also supplementary means of providing OA; so in some cases there are some very specific sequential contingencies and interdependencies among these models and scenarios.

“The NLP returns substantial benefits and savings at a modest cost, returning one of the highest benefit/cost ratios available from unilateral national policies during a transitional period (second to that of ‘Green Open Access’ self-archiving).”

I presume that in considering the costs and benefits of German national licensing the Houghton Report considered both the unilateral German national licensing scenario and the scenario if reciprocated globally. In this regard, it should be noted that OA has both user-end benefits [maximized access] and author-end benefits [maximized impact]: unilateral national licenses provide only the former, not the latter. Both unilateral Green and unilateral Gold, in contrast, provide only the latter, not the former. So what needs to be taken into account is global scalability and sustainability: how likely are other nations (and institutions) to wish – and to be able to afford – to reciprocate under the various scenarios?

“Whether ‘Green Open Access’ self-archiving in parallel with subscriptions is a sustainable model over the longer term is debatable”

First of all, if subscription publishing itself is not a sustainable model, then of course Green OA self-archiving is not a sustainable supplement either.

But in the hypothetical “overlay” Gold OA model it is being assumed that Green OA self-archiving is indeed sustainable — as a practice, not as a substitute form of publishing. (It is naive to think of spawning 28,000 brand-new Gold OA peer-reviewed journals in place of the circa 28,000 journals that exist today: A conversion scenario is much more realistic.)

And probably the most relevant sustainability question is not about the sustainability of the practice of Green OA self-archiving (keystrokes and institutional repositories), nor the sustainability of subscription publishing, but the sustainability of subscription publishing in parallel with universal Green OA self-archiving. One natural possibility is that globally mandated Green OA self-archiving will make journal subscriptions unsustainable, inducing a transition in publishing models, with journals, under cancelation pressure, cutting inessential products and services and their costs, and downsizing to what is being here called the “overlay” Gold OA model (though that’s probably not the aptest term to describe the outcome), while at the same time releasing the subscription cancelation funds to pay the much lower peer review service fees it entails.

“The results are comparable to those of previous studies from the UK and Netherlands. Green Open Access in parallel with the traditional model yields the best benefits/cost ratio.”

And what also need to be taken into account are sequential contingencies and priorities: Green OA self-archiving is not only the cheapest, fastest and surest way to provide OA, but it is also the natural way to induce a subsequent transition to affordable, sustainable Gold OA. But in order to be able to do that, it has to come first.

“Beside its benefits/cost ratio, the meaningfulness of the NLP is given by its enforceability.”

Green OA self-archiving mandates are enforceable too. And global scalability and sustainability have to be taken into account too, not just local access-provision.

“The true cost of toll access publishing (beside[s] the [cost of the] ‘buyback’ of information) is the prohibition of access to research and knowledge for society.”

But when toll access publishing is globally supplemented by mandatory Green OA self-archiving, the “prohibition” is pre-empted, at next to no extra cost.

Latest Article Alert from BMC Public Health

The latest articles from BMC Public Health, published between 08-Dec-2012 and 15-Dec-2012

For articles which have only just been published, you will see a ‘provisional PDF’ corresponding to the accepted manuscript.
A fully formatted PDF and full text (HTML) version will be made available soon.

Study protocol
Protocol for ADDITION-PRO: a longitudinal cohort study of the cardiovascular experience

AMI2 Content mining using PDF and SVG: progress

I’m now returning to the UK for a few weeks before coming back to AU to continue. This is a longish post, but important for anyone wanting to know the details of how we build an intelligent PDF reader and what it will be able to do. Although the examples are chemistry-flavoured, the approach applies to a wide range of science.

To recall…

AMI2 is a project to build an intelligent reader of the STM literature. The base is PDF documents (though Word, HTML and LaTeX will also be possible, and much easier and of higher quality). There are three phases at present (though the phasing and the names may change):

  • PDF2SVG. This converts good PDF losslessly into SVG characters, paths and images. It works well for (say) student theses and ArXiv submissions, but fails for most STM publisher PDFs because the quality of the “typesetting” is non-conformant and we have to use clunky, fragile heuristics. More in later blogs and below.
  • SVGPLUS. This turns low-level SVG primitives (characters and paths) into higher-level a-scientific objects such as paragraphs, sections, words, subscripts, rectangles, polylines, circles, etc. In addition it analyses components that are found universally in science (figures, tables, maths equations) and scientific document structure. It also identifies graphs, plots, etc. (but not chemistry, sequences, trees…)
  • SVG2XML. This interprets SVGPLUS output as science. At present we have prototyped chemistry, phylogenetics and spectroscopy, and have a plugin architecture that others can build on. The use of SVG primitives makes this architecture much simpler.

We’ve written a report, and here are the salient bits. It’s longish, so mainly for those interested in the details. But it has a few pictures…

PDFs and their interpretation by PDF2SVG

 

Science is universally published as PDF documents, usually created by machine and human transformation of Word or LaTeX documents. Almost all major publishers regard “the PDF” as the primary product (version of record) and most scientists read and copy PDFs directly from the publishers’ web sites; the technology is independent of whether this is Open or closed access. Most scientists read, print and store large numbers of PDFs locally to support their research.

PDF was designed for humans to read and print, not for semantic use. It is primarily “electronic paper” – all that can be guaranteed is coloured marks on “e-paper”. It was originally proprietary and has only fairly recently become an ISO standard. Much of the existing technology is proprietary and undocumented. By default, therefore, a PDF only conveys information to a sighted human who understands the human semantics of the marks-on-paper.

Over 2 million scholarly publications are published each year, most only easily available in PDF. The scientific information in them is largely lost without an expert human reader, who often has to transcribe the information manually (taking huge time and effort). Some examples:

In a PDF these are essentially black dots on paper. We must develop methods to:

  • PDF2SVG: Identify the primitives (in this case characters and symbols). This should be fairly easy, but because the technical standard of STM publishing is universally very non-conformant to standards (i.e. “poor”) we have had to create a large number of arbitrary rules. This non-conformity is a major technical problem and would be largely removed by the use of UTF-8 and Unicode standards.
  • SVGPLUS (and below): Understand the words (e.g. that “F”-“I”-“g” and “E”-“x”-“c”-“e”-“s”-“s” are words). PDF has no concept of “word”, “sentence”, “paragraph”, etc.
  • Detect that this is a Figure (e.g. by interpreting “Fig. ”)
  • Separate the caption from the plot
  • Determine the axial information (labels, numbers and tics) and interpret (or here, guess) the units
  • Extract the coordinates of the points (black circles)
  • Extract the coordinates of the line (see the calibration sketch after this list)
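The last three steps amount to a linear calibration from page coordinates to data coordinates. Here is a minimal sketch of that mapping, assuming two labelled tics have already been recognised on each axis (a hypothetical helper class, not part of PDF2SVG):

/** Minimal sketch: map page coordinates to data coordinates, given two
 *  calibration tics per axis (hypothetical helper, not the AMI2 API). */
public class AxisCalibration {

    private final double pageX0, pageX1, dataX0, dataX1;
    private final double pageY0, pageY1, dataY0, dataY1;

    public AxisCalibration(double pageX0, double dataX0, double pageX1, double dataX1,
                           double pageY0, double dataY0, double pageY1, double dataY1) {
        this.pageX0 = pageX0; this.dataX0 = dataX0;
        this.pageX1 = pageX1; this.dataX1 = dataX1;
        this.pageY0 = pageY0; this.dataY0 = dataY0;
        this.pageY1 = pageY1; this.dataY1 = dataY1;
    }

    /** Linear interpolation between the two X-axis calibration tics. */
    public double toDataX(double pageX) {
        return dataX0 + (pageX - pageX0) * (dataX1 - dataX0) / (pageX1 - pageX0);
    }

    /** Same for Y; page Y grows downwards, but the calibration pair absorbs the flip. */
    public double toDataY(double pageY) {
        return dataY0 + (pageY - pageY0) * (dataY1 - dataY0) / (pageY1 - pageY0);
    }
}

A log axis would need the same interpolation in log space, and units that have been guessed rather than read propagate that guess into every extracted point.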

If the PDF is standards-compliant it is straightforward to create the SVG. We use the Open Source PDFBox from Apache to “draw” to a virtual graphics device. We intercept these graphics calls and extract information on:

  • Position and orientation. PDF objects have x,y coordinates and can be indefinitely grouped (including scaling). PDF resolves all of this into a document on a virtual A4 page (or whatever else is used). The objects also have style attributes (stroke and fill colours, stroke-widths, etc.). Most scientific authors use simple colours and clean lines, which makes the analysis easier.
  • Text (characters). Almost all text is individual characters, which can be in any order (“The” might be rendered in the order “e”-“h”-“T”). Words are created knowing the screen positions of their characters. In principle all scientific text (mathematical equations, chemical symbols, etc.) can be provided in the Unicode toolset; e.g. a reversible chemical reaction symbol is the Unicode point U+21CC (HTML entity &#x21cc;) and will render as such in all modern browsers.

  • Images. These are bitmaps (normally rectangular arrays of pixels) and can be transported as PNG, GIF, JPEG, TIFF, etc. There are cases (e.g. photographs of people or scientific objects) where bitmaps are unavoidable. However, some publishers and authors encode semantic information as bitmaps, thereby destroying it. Here is an example:

    Notice how the lines are fuzzy (although the author drew them cleanly). It is MUCH harder to interpret such a diagram than if it had been encoded as characters and lines. Interpretation of bitmaps is highly domain-dependent and usually very difficult or impossible. Here is another (JPEG):

    Note the fuzziness, which is solely created by the JPEG (lossy) compression. Many OCR tools will fail on such poor-quality material.

  • Path (graphics primitives). These are used for objects such as
    • graphical plots (x-y, scatterplots, bar charts)
    • chemical structures

      This scheme, if drawn with clean lines, is completely interpretable by our software as chemical objects.

    • diagrams of apparatus
    • flowcharts and other diagrams expressing relationships

    Paths define only Move, Line and Curve. To detect a rectangle, SVGPLUS has to interpret these commands (e.g. MLLLL); a sketch of this signature test follows the list.
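Once the drawing calls have been intercepted, classifying a path is mostly string-and-geometry work on its command signature. Here is a minimal sketch of the rectangle case, assuming the intercepted commands have been recorded as a signature string plus a point list (a hypothetical representation, not AMI2’s internal one):

import java.awt.geom.Point2D;
import java.util.List;

/** Minimal sketch (hypothetical, not AMI2's internal code): recognise an
 *  axis-aligned rectangle from a MoveTo + 4 LineTo path ("MLLLL"). */
public class PathClassifier {

    /** signature has one letter per command (e.g. "MLLLL"); points holds the
     *  end point of each command, so a closed rectangle has 5 points. */
    public static boolean isRectangle(String signature, List<Point2D> points, double eps) {
        if (!signature.equals("MLLLL") || points.size() != 5) {
            return false;
        }
        if (points.get(0).distance(points.get(4)) > eps) {
            return false; // path does not return to its start, so it is not closed
        }
        boolean prevHorizontal = false;
        for (int i = 0; i < 4; i++) {
            Point2D a = points.get(i), b = points.get(i + 1);
            if (a.distance(b) < eps) {
                return false; // degenerate (zero-length) edge
            }
            boolean horizontal = Math.abs(a.getY() - b.getY()) < eps;
            boolean vertical = Math.abs(a.getX() - b.getX()) < eps;
            if (horizontal == vertical) {
                return false; // slanted edge: a general quadrilateral, not a box
            }
            if (i > 0 && horizontal == prevHorizontal) {
                return false; // edges must alternate horizontal/vertical
            }
            prevHorizontal = horizontal;
        }
        return true;
    }
}

Real paths are messier: a rectangle may be closed with a ClosePath instead of a final LineTo, drawn in either direction, or rotated, and each variant needs its own rule.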

There are, unfortunately, a large number of errors and uncertainties. The most common is the use of non-standard, undocumented encodings for characters. These come from proprietary tools (such as font providers for TeX, etc.) and from contracted typesetters. In these cases we have to cascade down:

  • Guess the encoding (often Unicode-like)
  • Create a per-font mapping of names to Unicode. Thus “MathematicalPi-One” is a commonly used font for math symbols: its “H11001” is drawn as a PLUS and we translate it to Unicode U+002B, but there is no public (or private) translation table (we’ve asked widely). So we have to do this manually by comparing glyphs (the printed symbols) to tables of Unicode glyphs. There are about 20 different “de facto” fonts and symbol sets in wide scientific use and we have to map them manually (maybe while watching boring cricket on TV). We have probably done about 60% of what is required. (A sketch of such a table, with a lint-style check, follows this list.)
  • Deconstruct the glyphs. Ultimately the PDF provides the graphical representation of a glyph on the screen, either as vectors or as a bitmap. We recently discovered a service (shapecatcher) which interprets up to 11,000 Unicode glyphs and is a great help. Murray Jensen has also written a glyph browser which cuts down the human time very considerably.
  • Apply heuristics. Sometimes authors or typesetters use the wrong glyph or kludge it visually. Here’s an example:

    Most readers would read this as “ten-to-the-minus-seven”, but the characters are actually “1”, “0”, EM-DASH, “7”. EM-DASH – which is used to separate clauses like this – is not a mathematical sign, so it’s seriously WRONG to use it. We have to add heuristics (à la UNIX lint) to detect and possibly correct this. Here’s worse: there’s a perfectly good Unicode symbol for NOT-EQUALS (U+2260):

    Unfortunately some typesetters will superimpose an EQUALS SIGN (=) with a SLASH (/). This is barbaric, and hard and tedious to detect and resolve. The continued development of PDF2SVG and SVGPLUS will probably be largely hacks of this sort.
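In code, the per-font tables and the lint-style checks are both small and mechanical; the hard part is compiling the tables by eye. A minimal sketch, in which the MathematicalPi-One entry comes from the example above and everything else is hypothetical:

import java.util.HashMap;
import java.util.Map;

/** Minimal sketch of two steps of the cascade (hypothetical class, not the
 *  AMI2 API): per-font glyph-name translation, and an EM-DASH lint rule. */
public class CharacterNormalizer {

    /** Per-font translation tables, compiled by hand from glyph comparison. */
    private static final Map<String, Map<String, Character>> FONT_TABLES = new HashMap<>();
    static {
        Map<String, Character> mathPiOne = new HashMap<>();
        mathPiOne.put("H11001", '\u002B'); // drawn as PLUS, from the example above
        // ... circa 20 such fonts, each mapped manually against Unicode glyph tables
        FONT_TABLES.put("MathematicalPi-One", mathPiOne);
    }

    /** Returns the Unicode character for a font-specific glyph name, or null. */
    public static Character toUnicode(String fontName, String glyphName) {
        Map<String, Character> table = FONT_TABLES.get(fontName);
        return table == null ? null : table.get(glyphName);
    }

    /** Lint rule for the exponent problem above: an EM DASH (U+2014) between
     *  digits is replaced by a proper MINUS SIGN (U+2212). A fuller check
     *  would also use the relative font size and baseline offset to confirm
     *  that the trailing digits really are a superscripted exponent. */
    public static String lintEmDash(String textRun) {
        return textRun.replaceAll("(\\d)\u2014(\\d)", "$1\u2212$2");
    }
}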

SVG and reconstruction to semantic documents: SVGPLUS

 

SVGPLUS assumes a correct SVG input of Unicode characters, SVG Paths, and SVGImages (the latter it renders faithfully and leaves alone). The task is driven by a control file in a declarative command language expressed in XML. We have found this to be the best method of representing the control, while preserving flexibility. It has the advantage of being easily customisable by users, and because it is semantic it can be searched or manipulated. A simple example:

<semanticDocument xmlns="http://www.xml-cml.org/schema/ami2">
  <documentIterator filename="org/xmlcml/svgplus/action/">
    <pageIterator>
      <variable name="p.root" value="${d.outputDir}/whitespace_${p.page}" type="file"/>
      <whitespaceChunker depth="3"/>
      <boxDrawer xpath="//svg:g[@LEAF='3']" stroke="red" strokeWidth="1" fill="yellow" opacity="0.2"/>
      <pageWriter filename="${p.root}_end.svg"/>
    </pageIterator>
  </documentIterator>
</semanticDocument>

 

This document identifies the directory to use for the PDFs (“action”), iterates over each PDF it finds, creates (SVG) pages for each, processes each of those with a whitespaceChunker (v.i.), draws boxes round the results, and writes each page to file. (There are many more components in SVGPLUS for analysing figures, etc.) A typical example is:

 

SVGPLUS has detected the whitespace-separated chunks and drawn boxes round the “chunks”. This is the start of the semantic document analysis. This follows a scheme:

  • Detect text chunks and detect the font sizes.
  • Sort into lines by Y coordinate, and sort within lines by X coordinate. The following has 5 or 6 lines:

     

     

    Normal, superscript, normal, subscript (subscript), normal

  • Find the spaces. PDF often has no explicit space characters – the spaces have to be calculated from inter-character distances. This is not standardised, and is affected by justification and kerning. (A sketch of the line-building and space-finding steps follows this list.)
  • Interpret variable font-size as sub- and super-scripts.
  • Manage super-characters such as the SIGMA.
  • Join lines. In general one line can be joined to the next by adding a space. Hyphens are left alone, as their interpretation depends on humans and culture. The output would thus be something like:

    the synthesis of blocks, stars, or other polymers of com~plex architecture. New materials that have the potential of revolutionizing a large part …

    This is the first place at which words appear.

  • Create paragraphs. This is through indentation heuristics and trailing characters (e.g. FULL STOP).
  • Create sections and subsections. This is normally through bold headings and additional whitespace. Example:

    Here the semantics are a section (“History of RAFT”) containing two paragraphs.
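A minimal sketch of the first three steps of this scheme – grouping characters into lines and finding the spaces – assuming each character arrives with its coordinates and font size (a hypothetical representation; SVGPLUS works on richer SVG character objects):

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/** Minimal sketch of line-building (hypothetical, much simpler than SVGPLUS). */
public class LineBuilder {

    public static class Glyph {
        final char c;
        final double x, y, fontSize;
        public Glyph(char c, double x, double y, double fontSize) {
            this.c = c; this.x = x; this.y = y; this.fontSize = fontSize;
        }
    }

    /** Sort by Y then X, start a new line when Y jumps by more than yTol, and
     *  insert a space when the X gap exceeds gapFactor * fontSize. A real
     *  implementation measures the gap from the previous glyph's right edge
     *  (x + width) and must allow for justification and kerning. */
    public static List<String> buildLines(List<Glyph> glyphs, double yTol, double gapFactor) {
        List<Glyph> sorted = new ArrayList<>(glyphs);
        sorted.sort(Comparator.comparingDouble((Glyph g) -> g.y)
                              .thenComparingDouble(g -> g.x));
        List<String> lines = new ArrayList<>();
        StringBuilder line = null;
        Glyph prev = null;
        for (Glyph g : sorted) {
            if (prev == null || Math.abs(g.y - prev.y) > yTol) {
                if (line != null) lines.add(line.toString());
                line = new StringBuilder(); // Y has jumped: start a new line
            } else if (g.x - prev.x > gapFactor * prev.fontSize) {
                line.append(' ');           // gap wide enough to be a space
            }
            line.append(g.c);
            prev = g;
        }
        if (line != null) lines.add(line.toString());
        return lines;
    }
}

Sub- and super-scripts then show up as glyphs whose font size shrinks and whose Y is offset by a fraction of the font size above or below the current baseline.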

 

The PATH interpretation is equally complex and heuristic. In the example below:

The reversible reaction is made up of two ML paths (“lines”) and two filled curves (“arrowheads”). All of this has to be heuristically determined. The arcs are simple CURVE-paths. (Note: the blank squares are non-Unicode points.)

 

In the axes of the plot, all the tick-marks are independent paths – SVGPLUS has to infer heuristically that together they form an axis.
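A toy version of such a heuristic, under the assumption that the candidate tick paths have already been reduced to short vertical strokes (hypothetical code, far simpler than the SVGPLUS implementation):

import java.util.List;

/** Minimal sketch of an axis heuristic: short vertical strokes with collinear
 *  bases, roughly evenly spaced in X, are taken to be the tick-marks of a
 *  horizontal axis (hypothetical code, not the SVGPLUS implementation). */
public class AxisDetector {

    public static class Tick {
        final double x, yBase; // X position of the stroke and Y of its base
        public Tick(double x, double yBase) { this.x = x; this.yBase = yBase; }
    }

    /** ticks must be pre-sorted by x. */
    public static boolean looksLikeHorizontalAxis(List<Tick> ticks, double eps) {
        if (ticks.size() < 3) {
            return false; // too few strokes to call it an axis
        }
        double baseY = ticks.get(0).yBase;
        double spacing = ticks.get(1).x - ticks.get(0).x;
        for (int i = 1; i < ticks.size(); i++) {
            if (Math.abs(ticks.get(i).yBase - baseY) > eps) {
                return false; // bases not collinear
            }
            double gap = ticks.get(i).x - ticks.get(i - 1).x;
            if (Math.abs(gap - spacing) > eps) {
                return false; // not evenly spaced
            }
        }
        return true;
    }
}

A matching long horizontal path underneath, plus nearby number characters, would then confirm the guess and supply the labels.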

In some diagrams there is significant text:

Here text and graphical primitives are mixed and have to be separated and analysed.

 

In summary, SVGPLUS consists of a large number of heuristics which will reconstruct a large proportion (but not all) of scientific articles into semantic documents. The semantics do and will include:

  • Overall sectioning (bibliographic metadata, introduction, discussion, experimental, references/citations)
  • Identification and extraction of discrete Tables, Figures and Schemes
  • Inline bibliographic references (e.g. superscripted)
  • Reconstruction of tables into column-based objects (where possible)
  • Reconstruction of figures into caption and graphics
  • Possible interpretation of certain common abstract scientific graphical objects (graphs, bar charts)
  • Identification of chemical formulae and equations
  • Identification of mathematical equations

There will be no scientific interpretation of these objects.

 

Domain-specific scientific interpretation of semantic documents

 

This is being developed as a plugin architecture for SVGPLUS. The intention is that a community develops pragmatics and heuristics for interpreting specific chunks of the document in a domain-specific manner. We and our collaborators will develop plugins for translating documents into CML/RDF:

  • Chemical formulae and reaction schemes
  • Chemical synthetic procedures
  • Spectra (especially NMR and IR)
  • Crystallography
  • Graphical plots of properties (e.g. variation with temperature, pressure, field, molar mass, etc.)

More generally we expect our collaborators (e.g. Ross Mounce, Panton Fellow, paleophylogenetics at the University of Bath, UK) to develop:

  • Mathematical equations (into MathML)
  • Phylogenetic trees (into NeXML)
  • Nucleic-acid and protein sequences (into standard formats)
  • Dose-response curves
  • Box-plots

 

Fidelity of SVG rendering in PDF2SVG. This includes one of the very rare bugs we cannot solve:

PDF:

SVG:

[Note that the equations are identical apart from the braces, which are mispositioned and too small. There is no indication in TextPosition as to where this scaling comes from.]

In PDFReader the equation is correctly displayed (the text is very small, so the screenshot is blurry; nonetheless it’s possible to see that the brackets are correct).

 

 

The Carrot and the Stick?

Unraveling Motivation and Attention

by Randolph S. Marshall, Career Corner Editor

A commentary on the recent Brain and Behavior article, “Effects of Motivation on Reward and Attentional Networks: an fMRI Study”, by Ivanov et al.

How does the anticipation of a reward interact with cognitive demand? This is the basic question that was asked by K-23 awardee Iliyan Ivanov. In his article just published in Brain and Behavior, Ivanov and colleagues used BOLD fMRI to examine regional brain activation in a 3-pronged experiment that pitted the motivational system against the attentional system. Both the motivation of an anticipated reward and higher levels of attention are known to speed up cognitive reaction times behaviorally, but what is the influence of the motivational system on cognitive control as a task requires more cognitive muscle? Does reward anticipation enhance performance or interfere with it? What if there is not only promise of reward, but risk of monetary loss? These questions are important both for our understanding of systems biology and for the treatment of individuals with attention deficit/hyperactivity disorder, obsessive-compulsive disorder, and drug addiction, where attention and motivation may be altered.

In this study of 16 healthy adults, behavioral results were as anticipated: shorter reaction times were seen with reward anticipation, particularly in the easier, “congruent” task trials. The imaging results confirmed that attentional network regions (right ACC, right primary motor cortex, supplementary motor and somatosensory association cortices bilaterally, right middle frontal gyrus and right thalamus) activated more during the higher cognitive demands of the non-congruent trials, whereas key components of the motivational network (bilateral insula and ventral striatum) engaged with the unique “surprising non-reward” component of the task. Furthermore, the interaction effects showed that cognitive conflict elicited greater activation, but only in the absence of reward incentives – as if subjects worked harder to avoid possible loss. Conversely, reward anticipation decreased activity in the attentional networks, possibly due to improved information processing.

Surprisingly, the more difficult task components decreased activity in the striatum and the orbito-frontal cortex, suggesting that harder trials may have been experienced as less rewarding. These results were interpreted as showing that, in the context of a difficult task, one can maximize performance both by increasing efforts to obtain rewards on easier trials and by committing more attentional effort to avoid punishment and losses during more difficult trials. The authors conclude that there is not a direct correlation between motivational incentives and improvement of performance, but that their interplay depends highly on the context.

I interviewed Dr. Ivanov about his experiment, and asked him to talk about the process of beginning his career in clinical neuroscience. Dr. Ivanov is currently Assistant Professor in Child Psychiatry at Mt. Sinai Medical Center in New York. He completed a K-23/R02 grant, sponsored by NIDA/AACAP, in 2010, and is now completing his work on an R03 to study the effects of motivation and attention in more depth.

Marshall:  What was the most interesting finding for you in this study?

Ivanov: The interaction effect, which suggested that incentives may boost information processing but can also be a distractor and possibly hamper performance on cognitive tasks. This is interesting because new studies suggest that if you have strong stimuli (e.g. a drug like methylphenidate) this interaction effect may be reversed, as we hope to show in a follow-up study.

Marshall: Was clinical relevance an important motivator for you in pursuing this project, or were you more interested in the systems biology aspect?

Ivanov: I would say both. As a clinician I was interested in the main idea which was whether we could tap into risk factors that would help us understand the motivational and attentional systems. I wanted to know if there is a biological signature or hallmark for what treatment might be helpful in children at risk for later substance abuse.

Marshall: How important was mentorship in the design and implementation of this work?

Ivanov: Crucial, especially with neuroimaging. The amount of time and the amount of knowledge needed was very high. I had both inside and outside mentors. I studied with the Director of Child Psychiatry at Mt. Sinai, Jeffrey Newcorn, and with outside mentors also, which turned out to be a very good thing. I worked with Tom Crowley, an adult psychiatrist in Denver, and Edythe London from UCLA, who was a mentor for my K-23. I also went to the Wellcome Trust Centre for Neuroimaging in London a couple of times to work with Karl Friston. Through this process what you find is that you accumulate a group of people around the country or the world who you can then count on later for advice and support.

Marshall: What was the hardest part about getting this project done?

Ivanov: I didn’t know much about neuroimaging when I started. I was naïve about the time needed to complete a neuroimaging study in young children. It’s not like clinical work, in which we get used to working quickly; getting used to working in that scientific environment is different. It is also very demanding moving into human research, particularly with youths. You have to work with kids and family through the whole process. Children have their natural curiosity, but entering the fMRI scanner is not an everyday experience and they can be fearful – having a skilled research team is crucial.

Marshall: What is the next hypothesis to test? Is it a direct follow-up of this project, or will you work on a parallel project?

Ivanov: We may be able to set up a treatment trial. We want to ask: do we see clinical subgroups with particular biological signatures that might optimize our treatments for high-risk groups?

Marshall: What advice would you give a young investigator looking to get a first K-award or similar grant funded?

Ivanov: Get a good mentor. A good mentor will help flesh out your ideas.  Also, you have to find an area you are really interested in and feel really passionate about.  And when you start thinking about the process, don’t have the goal right away of producing the paper that will turn science around.  Concentrate on learning, increasing your background knowledge, and developing your network. The best outcome for the K is to develop the confidence and skills that will let you succeed in the future.

Going to the ASCB Annual Meeting? PLOS would like to meet you!

MA104 cells labelled with actin (green) and DNA (blue). Image credit: PLoS ONE 7(10): e47612. doi:10.1371/journal.pone.0047612

Are you attending the upcoming Annual Meeting of the American Society for Cell Biology? Then we want to meet you in person! PLOS ONE has published thousands of papers in the field of cell biology, so we know there must be a lot of PLOS ONE authors out there. Whether you are an editor, reviewer, author or prospective author, we hope to see you! For more information about where we’ll be and when, please read on.

 

An evening with the PLOS Editorial Boards:

PLOS is hosting a reception for all Editorial Board members for an evening of food, drink and discussion.  It will be a great opportunity to connect with your fellow Editors, and a few staff Editors will also be on hand.  The highlight of the evening will be speakers Emma Ganley and Jason Swedlow, focusing on the challenges and importance of sharing data in the world of cell biology.

  • Emma Ganley is a Senior Editor on PLOS Biology, with experience in data availability and navigation in online publication. 
  • Jason Swedlow is co-founder of Open Microscopy Environment (OME), and directs his own research group at the University of Dundee.

When: 6 to 8pm, Tuesday, December 18, 2012

Where: The Box – 1069 Howard Street (between 6th & 7th), San Francisco, CA 94103

Be sure to RSVP, because space is limited: http://scibar.eventbrite.com

Get in touch if you would like further information or have any questions!

 

Calling all PLOS ONE authors to the PLOS booth in the Exhibition Hall!

Have you published with PLOS ONE? Come by booth #1322! We would love to show you your article-level metrics in exchange for a t-shirt! Find out who has cited your work, how many people are using it in their Mendeley libraries, and the number of times the PDF has been downloaded (among many other things). PLOS ONE staff will be on hand to discuss the benefits of publishing with PLOS, and to answer all of your questions, both specific and general.

We look forward to meeting you!

 

 

The Evolution of Author Guidelines

Congratulations are due to PeerJ for succeeding in bringing into focus an essential publisher service that has been little publicised in the past.

The journal opened for submissions on December 3rd, and many tweets and blogs have been spawned by the following passage in the Instructions for Authors:

We want authors spending their time doing science, not formatting.

We include reference formatting as a guide to make it easier for editors, reviewers, and PrePrint readers, but will not strictly enforce the specific formatting rules as long as the full citation is clear.

Styles will be normalized by us if your manuscript is accepted.

Of course, it would be ridiculous to assert that every manuscript ever submitted up to this point had perfectly formatted references in journal style; in fact it is relatively rare to make no edits at all on a reference list. Journal Production Editors have been converting reference formats since journal publishing began; laboriously at first, but the digital revolution has certainly helped in recent years, with more automated processes and specialist typesetters taking on much of the tedium.

 As the PeerJ guidelines correctly state, a requirement for a particular style can help the editorial and review process, and I would go further in saying that it can impose some rigour on the creation of the reference list, helping to ensure that all critical elements are present. However, it has been the case for some time that publishers have barely batted an eye if an article happens to arrive in the incorrect format, as long as all of the important content was present.

 At Wiley, we took this a stage further on the launch of our Wiley Open Access program back in May 2011. We made a point of paring the formatting requirements down to a bare minimum for the entire article. The Author Guidelines state:

 We place very few restrictions on the way in which you prepare your article, and it is not necessary to try to replicate the layout of the journal in your submission. We ask only that you consider your reviewers by supplying your manuscript in a clear, generic and readable layout, and ensure that all relevant sections are included. Our production process will take care of all aspects of formatting and style.

And with respect to the references:

 As with the main body of text, the completeness and content of your reference list is more important than the format chosen. A clear and consistent, generic style will assist the accuracy of our production processes and produce the highest quality published work, but it is not necessary to try to replicate the journal’s own style, which is applied during the production process. If you use bibliographic software to generate your reference list, select a standard output style, and check that it produces full and comprehensive reference listings… The final journal output will use the ‘Harvard’ style of reference citation. If your manuscript has already been prepared using the ‘Vancouver’ system, we are quite happy to receive it in this form. We will perform the conversion from one system to the other during the production process.

There is no doubt that this service, which has been quietly in operation in most journals for some time, has now been thrown much more into the limelight, and this can only be positive because it showcases one of the valuable services that professional publishing can provide.

Reading through the blogs, I see that the more overt adoption of this service as a point of policy is already spreading to more journals, as it has to eLife, and Elsevier’s Free Radical Biology & Medicine.

 This can only be a good thing.

Will Wilcox, Journals Content Management Director for Life Sciences

Upgrade to SHERPA/JULIET Released

The Centre for Research Communications is pleased to announce the release of an upgrade to its SHERPA/JULIET service, the go-to database of research funders’ open access policies – http://www.sherpa.ac.uk/juliet/.

SHERPA/JULIET has now grown to cover 110 funders.

Growth of the SHERPA/JULIET database to 2012-12-12

The increase in size has necessitated an upgrade to the JULIET website and the introduction of several new features, including:

  • Redesign of the look and feel of the website to match JULIET’s partner service RoMEO – the database of publishers’ copyright and open access policies.
  • The introduction of a search interface, in addition to the existing “browse” list. This allows you to search by funder’s name and country. In “advanced mode”, you can also search according to the funders’ policy requirements for open access publications, and the archiving of publications and data.
  • New statistical charts. While the current focus of JULIET is on the United Kingdom, we are extending coverage to the rest of the world.
  • A prototype Application Programming Interface (API).
  • Lists of new additions and news stories.

JULIET is currently funded by JISC via UK RepositoryNet+ (www.repositorynet.ac.uk/).

Peter

Latest Article Alert from Environmental Health

The latest articles from Environmental Health, published between 29-Nov-2012 and 13-Dec-2012

For articles which have only just been published, you will see a ‘provisional PDF’ corresponding to the accepted manuscript.
A fully formatted PDF and full text (HTML) version will be made available soon.

Research
Air pollution, fetal and infant tobacco smoke exposure, and wheezing in preschool children