MEDICAL: Artificial Intelligence in EHRs


By White Hat Anonymous

Epic Systems, the country’s leading electronic health record (EHR) company, says an algorithm it developed can accurately flag sepsis in patients 76% of the time. The life-threatening condition, which arises from infection, is a major concern for hospitals: One-third of patients who die in hospitals have sepsis, per the CDC. 

  • Generally, the earlier sepsis is diagnosed and treated, the better a patient’s chances of survival—and hundreds of hospitals use Epic’s sepsis prediction model, The Verge reports. 

The problem: According to a study published this week in JAMA Internal Medicine, Epic Systems may have gotten the success rate wrong: The model is only correct 63% of the time—“substantially worse than the performance reported by its developer,” the researchers wrote. 

  • Part of the issue traces back to how the algorithm was developed, Stat News reports: It was trained to flag when doctors would submit bills for sepsis treatment, which doesn’t always line up with when patients first show symptoms. 
  • “It’s essentially trying to predict what physicians are already doing,” said study author Dr. Karandeep Singh.
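The gap between a developer’s internal figures and an external check often comes down to how a model’s discrimination is scored on new patients. As a rough illustration only (the article doesn’t name the exact metric Epic or the JAMA researchers used, so treating the figures as an AUROC-style statistic is an assumption), here is a minimal sketch of one common measure, the area under the ROC curve:

```python
def auroc(labels, scores):
    """AUROC: the probability that a randomly chosen positive case
    receives a higher risk score than a randomly chosen negative case
    (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy external-validation set (hypothetical): 1 = sepsis, 0 = no sepsis,
# paired with the model's risk scores for each patient.
labels = [1, 1, 1, 0, 0, 0, 0, 0]
scores = [0.9, 0.7, 0.4, 0.8, 0.5, 0.3, 0.2, 0.1]

print(auroc(labels, scores))  # 0.8 on this toy data
```

The point of the sketch: the same model can score very differently depending on whose patients, and whose outcome labels, go into `labels`—which is why external validation on a hospital’s own data matters before deployment.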


When reached for comment, Epic Systems told us the researchers’ hypothetical scenario lacked “the required validation, analysis, and tuning that organizations need to do before deployment,” adding that the JAMA study’s findings differed from other research. 



Bottom line: Algorithms can augment healthcare, but the life-or-death nature of their use requires serious due diligence.

ASSESSMENT: Your thoughts are appreciated



2 Responses

  1. AI Elsewhere

    After surveying 42 federal agencies that employ law enforcement officers about their use of facial recognition tech, here’s what the Government Accountability Office found.

    Between January 2015 and March 2020, at least 20 agencies owned their own facial recognition systems or used systems owned by others. Those “others” included controversial software like Clearview AI.

    Of the 15 agencies that used non-federal FRT software, only one agency (ICE) was aware of which systems were used by employees. The other agencies didn’t track that information, according to the report, meaning they had no list of approved FRT software—and virtually no accountability framework.

    Case study: Six federal agencies used FRT during last summer’s Black Lives Matter protests, in attempts to identify people in images of “civil unrest, riots, or protests.”

    For example, the US Postal Inspection Service used Clearview AI’s software in investigations related to damaging USPS property, stealing mail, and more.

    Three federal agencies also used facial recognition software to identify people at the US Capitol attack on Jan. 6.

    Solutions, solutions: Since most agencies using FRT had no accountability standards, the GAO made its own recommendations in a separate report, including introducing a tracking mechanism to identify which systems are being used and assess the risks.

    Zoom out: Proposed legislation, including the recently revived Facial Recognition and Biometric Technology Moratorium Act, would ban federal agencies from using FRT and other biometric technology entirely.




  2. Edge computing is reshaping health care by bringing big data processing and storage closer to the source, to support game-changing technologies such as the internet of things, artificial intelligence, and robotics.

    Dr. David E. Marcinko MBA

