An open access medical knowledge base for community driven diagnostic decision support system development | BMC Medical Informatics and Decision Making

Abstract: Introduction

While early diagnostic decision support systems were built around knowledge bases, more recent systems employ machine learning to consume large amounts of health data. We argue curated knowledge bases will remain an important component of future diagnostic decision support systems by providing ground truth and facilitating explainable human-computer interaction, but that prototype development is hampered by the lack of freely available computable knowledge bases.

Methods

We constructed an open access knowledge base and evaluated its potential in the context of a prototype decision support system. We developed a modified set-covering algorithm to benchmark the performance of our knowledge base compared to existing platforms. Testing was based on case reports from selected literature and medical student preparatory material.

Results

The knowledge base contains over 2000 ICD-10 coded diseases and 450 RxNorm coded medications, with over 8000 unique observations encoded as SNOMED or LOINC semantic terms. Using 117 medical cases, we found the accuracy of the knowledge base and test algorithm to be comparable to established diagnostic tools such as Isabel and DXplain. Our prototype, as well as DXplain, showed the correct answer as “best suggestion” in 33% of the cases. While we identified shortcomings during development and evaluation, we found the knowledge base to be a promising platform for decision support systems.

Conclusion

We built and successfully evaluated an open access knowledge base to facilitate the development of new medical diagnostic assistants. This knowledge base can be expanded and curated by users and serve as a starting point to facilitate new technology development and system improvement in many contexts.
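The abstract describes ranking diagnoses with a modified set-covering algorithm over coded observations, though the details of the modification are not given here. The classic greedy variant of set cover can serve as a minimal sketch of the idea: repeatedly pick the disease that explains the most still-unexplained findings. All disease names, findings, and the scoring scheme below are illustrative assumptions, not taken from the actual knowledge base or paper.

```python
def rank_diagnoses(knowledge_base, observed):
    """Greedy set cover: pick diseases whose known findings cover the observed ones.

    Returns a ranked list of (disease, fraction of observed findings explained).
    """
    observed = set(observed)
    uncovered = set(observed)
    ranking = []
    while uncovered:
        # Score each candidate by how many still-uncovered findings it explains.
        best, best_cover = None, set()
        for disease, findings in knowledge_base.items():
            cover = uncovered & findings
            if len(cover) > len(best_cover):
                best, best_cover = disease, cover
        if best is None:  # no remaining disease explains any uncovered finding
            break
        ranking.append((best, len(observed & knowledge_base[best]) / len(observed)))
        uncovered -= best_cover
    return ranking

# Hypothetical toy knowledge base; a real one would map ICD-10 codes to
# SNOMED/LOINC-coded observations.
kb = {
    "influenza": {"fever", "cough", "myalgia"},
    "pneumonia": {"fever", "cough", "dyspnea"},
    "migraine": {"headache", "photophobia"},
}
print(rank_diagnoses(kb, {"fever", "cough", "dyspnea", "headache"}))
# → [('pneumonia', 0.75), ('migraine', 0.25)]
```

In this toy run, pneumonia is chosen first because it explains three of the four observed findings; migraine then covers the remaining headache. Influenza is never selected because everything it explains was already covered.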

PubSweet Collaboration Week, May 7–13 : Collaborative Knowledge Foundation

“About three years ago, we set out to build a framework for building publishing software with components. Take a component here, a component there, make one on your own, and, presto! you have your custom publishing platform! While the framework itself has matured significantly, we’re not there yet in terms of the available components and how they fit together.

At the same time, we started building a community around this framework, organisations and people looking to innovate within this space and looking for a way to do so. While building a community has to happen in parallel with software development, I think that if you’re doing open source development right, your community will be ahead of the software most of the time. This is certainly the case for us. We envisioned a community that openly shares their experiences and solutions and is willing to collaborate on new ideas, despite basically being competitors, and I can happily (and proudly) say that our community has already reached this ideal.

To close the loop and make PubSweet the go-to framework and component library for developing publishing software, we need to take the lessons from the three systems in production right now (Hindawi’s, EBI’s and eLife’s publishing systems) and incorporate them into PubSweet itself, for everyone to use and benefit from. If we could just get the designers and developers of these systems in the same room, get them to talk to each other, share their custom approaches and try to find commonalities between them… wouldn’t that be awesome? Luckily our community is awesome, and well-versed in that sort of thing, and that’s exactly what’s happening in our event this week!

For the inaugural PubSweet Collaboration Week, starting on May 7th, Coko, EBI, eLife and Hindawi are getting together in Cambridge to make more parts of these systems reusable and add them to PubSweet’s component library….”

The Scientific Paper Is Obsolete. Here’s What’s Next. – The Atlantic

“Perhaps the paper itself is to blame. Scientific methods evolve now at the speed of software; the skill most in demand among physicists, biologists, chemists, geologists, even anthropologists and research psychologists, is facility with programming languages and “data science” packages. And yet the basic means of communicating scientific results hasn’t changed for 400 years. Papers may be posted online, but they’re still text and pictures on a page.

What would you get if you designed the scientific paper from scratch today? …

Software is a dynamic medium; paper isn’t. When you think in those terms it does seem strange that research like Strogatz’s, the study of dynamical systems, is so often being shared on paper …

I spoke to Theodore Gray, who has since left Wolfram Research to become a full-time writer. He said that his work on the notebook was in part motivated by the feeling, well formed already by the early 1990s, “that obviously all scientific communication, all technical papers that involve any sort of data or mathematics or modeling or graphs or plots or anything like that, obviously don’t belong on paper. That was just completely obvious in, let’s say, 1990,” he said. …”

Open Science, Open Source and R

“Even when the authors sent you their data, it often didn’t help that much. One of the most common problems was that when you re-analysed their data, you ended up with different answers from the ones they had reported! This turned out to be quite common, because most descriptions of data analyses provided in journal articles are incomplete and ambiguous. What you really needed was the original authors’ source code—an unambiguous and complete record of every data processing step they took, from the raw data files, to the graphs and statistics in the final report….

There’s still one major problem to solve. Publishing your scientific source code is essential for open science, but it’s not enough. For fully open science, you also need the platforms on which that code runs to be open. Without open platforms, the future usability of open-source code is at risk….

The Replication Crisis might have been one of the best things ever to happen to psychology. It became a catalyst for much-needed change to our scientific processes….”

Making the Move to Open Journal Systems 3: Recommendations for a (mostly) painless upgrade

Abstract: From June 2017 to August 2018, Scholars Portal, a consortial service of the Ontario Council of University Libraries, upgraded 10 different multi-journal instances of the Open Journal Systems (OJS) software to OJS 3, building expertise on the upgrade process along the way. The final and the largest instance to be upgraded was the University of Toronto Libraries, which hosts over 50 journals. In this article, we will discuss the upgrade planning and process, problems encountered along the way, and some best practices in supporting journal teams through the upgrade on a multi-journal instance. We will also include checklists and technical troubleshooting tips to help institutions make their upgrade as smooth and worry-free as possible. Finally, we will go over post-upgrade support strategies and next steps in making the most out of your transition to OJS 3. This article will primarily be useful for institutions hosting instances of OJS 2, but those that have already upgraded, or are considering hosting the software, may find the outlined approach to support and testing helpful.

Repository optimisation & techniques to improve discoverability and web impact : an evaluation – Strathprints

Abstract:  In this contribution we experiment with a suite of repository adjustments and improvements performed on Strathprints, the University of Strathclyde institutional repository powered by EPrints 3.3.13. These adjustments were designed to support improved repository web visibility and user engagement, thereby improving usage. Although the experiments were performed on EPrints it is thought that most of the adopted improvements are equally applicable to any other repository platform. Following preliminary results reported elsewhere, and using Strathprints as a case study, this paper outlines the approaches implemented, reports on comparative search traffic data and usage metrics, and delivers conclusions on the efficacy of the techniques implemented. The evaluation provides persuasive evidence that specific enhancements to technical aspects of a repository can result in significant improvements to repository visibility, resulting in a greater web impact and consequent increases in content usage. COUNTER usage grew by 33% and traffic to Strathprints from Google and Google Scholar was found to increase by 63% and 99% respectively. Other insights from the evaluation are also explored. The results are likely to positively inform the work of repository practitioners and open scientists.

Public Access Submission System on Vimeo

“Johns Hopkins University, Harvard University, MIT, and 221B have developed the Public Access Submission System (PASS), which will support compliance with US funding agencies’ public access policies and institutional open access policies. By combining workflows between the two compliance pathways, PASS facilitates simultaneous submission into funder repositories (e.g., PubMed Central) and institutional repositories. We intend to integrate a data archive so that researchers can submit cited data at the same time. PASS also features a novel technology stack including Fedora, Ember, JSON-LD, Elasticsearch, ActiveMQ, Java and Shibboleth (with an eye toward multi-institutional support). This talk will include a demonstration of PASS in action. The talk will also outline the steps by which we have engaged the university’s central administration (including the president’s office and the provost’s office) to provide funding, sponsorship for PASS and access to internal grants databases (e.g., COEUS), and engaged US funding agencies, including the National Institutes of Health, which has offered access to APIs for tracking and correlating submissions, and the National Science Foundation, which discussed ways to integrate PASS and its reporting system in the future.”

If Software Is Funded from a Public Source, Its Code Should Be Open Source | Linux Journal

“If we pay for it, we should be able to use it….

But it’s important not to overstate the “free as in beer” element here. All major software projects have associated costs of implementation and support. Departments choosing free software simply because they believe it will save lots of money in obvious ways are likely to be disappointed, and that will be bad for open source’s reputation and future projects….

Moving to open-source solutions does not guarantee that personal data will not leak out, but it does ensure that the problems, once found, can be fixed quickly by government IT departments—something that isn’t the case for closed-source products. This is a powerful reason why public funds should mean open source—or as a site created by the Free Software Foundation Europe puts it: “If it is public money, it should be public code as well”.

The site points out some compelling reasons why any government code produced with public money should be free software. They will all be familiar enough to readers of Linux Journal. For example, publicly funded code that is released as open source can be used by different departments, and even different governments, to solve similar problems. That opens the way for feedback and collaboration, producing better code and faster innovation. And open-source code is automatically available to the people who paid for it—members of the public. They too might be able to offer suggestions for improvement, find bugs or build on it to produce exciting new applications. None of these is possible if government code is kept locked up by companies that write it on behalf of taxpayers….”