Abstract: The main purpose of this chapter is to report how researchers working in the area of e-Infrastructures organize their “data and publication management” activities and themselves rely on research infrastructures to do so. Because this field is young and has a markedly multidisciplinary computer science character, no well-established research infrastructure is available, and researchers tend to adopt “infrastructure-flavoured” solutions local to their organizations. As a consequence, the authors of this chapter (from the DLib research group at CNR, Italy, and the MADGIK research group at the University of Athens, Greece) approached this study by collecting a number of experiences from relevant stakeholders in the field in order to identify “local infrastructure” commonalities and “research infrastructure” desiderata.
“Welcome to CogPrints, an electronic archive for self-archive papers in any area of Psychology, Neuroscience, and Linguistics, and many areas of Computer Science (e.g., artificial intelligence, robotics, vision, learning, speech, neural networks), Philosophy (e.g., mind, language, knowledge, science, logic), Biology (e.g., ethology, behavioral ecology, sociobiology, behaviour genetics, evolutionary theory), Medicine (e.g., Psychiatry, Neurology, human genetics, Imaging), Anthropology (e.g., primatology, cognitive ethnology, archeology, paleontology), as well as any other portions of the physical, social and mathematical sciences that are pertinent to the study of cognition….”
“Goal 1: Develop an evidence based understanding of current best practices in publishing across computing science.
Recent examples of reflection on peer review, which demonstrated significant variation in accept/reject decisions made by program committees (NIPS), and initiatives such as ACM Artefact Review and SIGCHI RepliCHI Award, show a desire from the research community to improve research and publication practice. This working group will collate an evidence base from the computing science community, bringing together currently disparate efforts in this area. Our on-going survey of practice will be publicised through a blog aimed at computing science researchers and practitioners.
Goal 2: Re-imagine a publishing and dissemination culture that exemplifies the values of open access, open data, and rigour.
Values in publication are changing, with more support than ever for open access, open data, transparency, and accessibility. Often, these values are also mandated by funding bodies that spend public money. We will develop concepts for a modern approach to knowledge sharing that could support new reviewing processes, enable multimedia archives and resources, incentivise reproducibility and open practices based on empirical evidence.
Goal 3: Advocate for change in publishing practice based on empirical evidence and ethical values.
This working group will develop channels to put these concepts into practice. We will disseminate our results to SIG leaders and through the Publications Board to enact change in how publishing practice occurs throughout ACM….”
“Stuart M. Shieber, the James O. Welch, Jr. and Virginia B. Welch Professor of Computer Science at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), has been named a fellow of the Association for Computational Linguistics….As Faculty Director of Harvard’s Office for Scholarly Communication, Shieber has also led Harvard’s efforts to institute open-access policies that are now emulated elsewhere….”
“With the increased interest in computational sciences, machine learning (ML), pattern recognition (PR) and big data, governmental agencies, academia and manufacturers are overwhelmed by the constant influx of new algorithms and techniques promising improved performance, generalization and robustness. Sadly, result reproducibility is often an overlooked feature accompanying original research publications, competitions and benchmark evaluations. The main reasons behind such a gap arise from natural complications in research and development in this area: the distribution of data may be a sensitive issue; software frameworks are difficult to install and maintain; test protocols may involve a potentially large set of intricate steps which are difficult to handle. Given the rising complexity of research challenges and the constant increase in data volume, the conditions for achieving reproducible research in the domain are also increasingly difficult to meet. To bridge this gap, we built an open platform for research in computational sciences related to pattern recognition and machine learning, to help with the development, reproducibility and certification of results obtained in the field. By making use of such a system, academic, governmental or industrial organizations enable users to easily and socially develop processing toolchains, re-use data, algorithms and workflows, and compare results from distinct algorithms and/or parameterizations with minimal effort. This article presents such a platform and discusses some of its key features, uses and limitations. We overview a currently operational prototype and provide design insights.”
“Harvard’s School of Engineering and Applied Sciences (SEAS) is pleased to announce a pilot project recommending to faculty engaged in a review, promotion, or tenure process to use Harvard’s open-access repository DASH (Digital Access to Scholarship at Harvard) as part of their preparations. There are two benefits. First, DASH will make the faculty member’s work more widely and easily accessible to potential participants in a review process. Second, it will provide open access to a larger part of the research output of SEAS faculty members.
SEAS is part of the Harvard Faculty of Arts and Sciences, which unanimously adopted an open-access policy in 2008, asking faculty to deposit their new scholarly articles in DASH. SEAS strongly supports this policy and sees this program as one more incentive to help implement the policy. This recommendation does not change the review, promotion, and tenure criteria or standards at SEAS, and preserves faculty freedom to submit scholarly work to the publishers of their choice.”
“Recently, the participants of the Conference on Computational Complexity (CCC)—the latest iteration of which I’ll be speaking at next week in Vancouver—voted to declare their independence from the IEEE, and to become a solo, researcher-organized conference. See this open letter for the reasons why (basically, IEEE charged a huge overhead, didn’t allow open access to the proceedings, and increased rather than decreased the administrative burden on the organizers). As a former member of the CCC Steering Committee, I’m in violent agreement with this move, and only wish we’d managed to do it sooner….”