Artificial Intelligence in Medicine!

AI in Medicine – Who is Responsible?

[By staff reporters]



3 Responses

  1. Big Tech’s guide to talking about AI ethics

    AI researchers often say good machine learning is really more art than science. The same could be said for effective public relations. Selecting the right words to strike a positive tone or reframe the conversation about AI is a delicate task: done well, it can strengthen one’s brand image, but done poorly, it can trigger an even greater backlash.

    The tech giants would know. Over the last few years, they’ve had to learn this art quickly as they’ve faced increasing public distrust of their actions and intensifying criticism about their AI research and technologies.

    Now they’ve developed a new vocabulary to use when they want to assure the public that they care deeply about developing AI responsibly—but want to make sure they don’t invite too much scrutiny. Here’s an insider’s guide to decoding their language and challenging the assumptions and values baked in.

    via Karen Hao


  2. What’s an algorithm?

It depends on whom you ask – but often the word is used to deflect accountability for human decisions. It suggests a system so complex that a human might struggle to understand its inner workings or anticipate its behavior once deployed.

    But the term shouldn’t be used as a shield to absolve the humans who designed it of any consequences of its use.



  3. AI in Medicine

    Many thanks, Sherman. And now, AI in medicine. Who is responsible?

    Dr. David E. Marcinko MBA

