AI in Medicine – Who is Responsible?
[By staff reporters]
https://qz.com/1905712/when-ai-in-healthcare-goes-wrong-who-is-responsible-2/
Filed under: iMBA, Inc., Information Technology | Tagged: AI, AI in medicine, HIT, IT |
Big Tech’s guide to talking about AI ethics
AI researchers often say that good machine learning is really more art than science. The same could be said of effective public relations. Selecting the right words to strike a positive tone or reframe the conversation about AI is a delicate task: done well, it can strengthen a brand's image; done poorly, it can trigger an even greater backlash.
The tech giants would know. Over the last few years, they’ve had to learn this art quickly as they’ve faced increasing public distrust of their actions and intensifying criticism about their AI research and technologies.
https://www.technologyreview.com/2021/04/13/1022568/big-tech-ai-ethics-guide/
Now they’ve developed a new vocabulary to use when they want to assure the public that they care deeply about developing AI responsibly—but want to make sure they don’t invite too much scrutiny. Here’s an insider’s guide to decoding their language and challenging the assumptions and values baked in.
via Karen Hao
What’s an algorithm?
It depends on whom you ask, but the word is often used to deflect accountability for human decisions. It suggests a system so complex that a human would struggle to understand its inner workings or anticipate its behavior when deployed.
But the term should not be used as a shield to absolve the humans who designed the system of the consequences of its use.
Sherman
AI in Medicine
Many thanks, Sherman. And now, AI in medicine. Who is responsible?
https://qz.com/1905712/when-ai-in-healthcare-goes-wrong-who-is-responsible-2/
Dr. David E. Marcinko MBA