Electronic health records software often written without doctors’ input

By Kathryn Doyle (Reuters Health) - The reason many doctors find electronic health records (EHR) difficult to use might be that the software wasn't properly tested, researchers suggest.

Current guidelines and industry standards recommend that new EHR software be tested by at least 15 end users with a clinical background to make sure it is usable and safe before receiving federal certification. But a new study finds that many certified products did not actually undergo this user testing, or were tested without clinical testers.

Despite the guidelines, “there’s no explicit requirement for end user testers to have a clinical background,” said Raj M. Ratwani of MedStar Health in Washington, D.C., who worked on the study. Testing with 15 participants will capture about 90% of usability issues, but many of the certified products in this study involved fewer people, Ratwani told Reuters Health by phone.

The researchers studied publicly available records of usability testing for 41 EHRs that were certified by the U.S. Department of Health and Human Services’ Office of the National Coordinator for Health Information Technology (ONC). The vendors reported what usability tests were used, and the results.

The researchers looked at only one of the eight usability parameters that should be tested: computerized provider order entry (CPOE). The order entry tool is most often used by doctors and nurses to select patients and prescribe medications. Usability problems with CPOE can result in wrong medication orders and patient harm, Ratwani said. “From a patient’s perspective this directly impacts us heavily,” he said.

Of the 41 vendor reports, 14 did not describe their user-centered design process at all. Of those that did describe their process, 19 used an industry-standard process and six used one developed internally by the vendor. On average, the authors reported September 8 in JAMA, the various EHRs were each tested by 14 participants.
More than half were tested on fewer than 15 people, and only nine used 15 people with clinical backgrounds. One vendor used no clinical participants, seven had no physician testers, and two used employees at the vendor company. Others did not provide enough information to determine whether any of the testers were physicians.

In a phone call with Reuters Health, e-health expert Farah Magrabi of Macquarie University in Sydney, Australia, listed potential usability problems with EHRs. “In order entry systems that require the user to scroll through a drop-down menu with many options, the options may not be arranged in an intuitive manner,” Magrabi said. Also, when searching for test results, the critical results may not be highlighted appropriately.

Other common problems include frequent system refreshes, Ratwani said. “If you’re about to click on a patient’s name, the system refreshes and you may click on the wrong patient’s name. You may get the lab test for a different patient entirely.”

“It’s difficult to point the finger at one stakeholder,” Ratwani said. “Vendors need to be employing more rigorous processes, and authorized certification bodies, groups charged with reviewing the results of reports and certification of products, need to have clearer guidelines.”

“This problem has been around for as long as we’ve had EHRs,” Magrabi said. “People can think about being able to pick up their smartphone and use it without needing a half day of training, but you need that for EHRs,” she said.

SOURCE: http://bit.ly/1EMMHut JAMA 2015.