Matt Pfeffer

@inIVmatics

Informatics @ Flatiron Health. Yep, my username is a terrible joke


I have a couple of codes for getting into the other place, in case anyone else is finding X increasingly unpalatable. (I only use feeds here now, but still....) There's only a little health tech and informatics stuff there so far, but you gotta start somewhere, right?


Matt Pfeffer Reposted

Great thread on Hinton's infamous prediction about AI replacing radiologists: "thinkers have a pattern where they are so divorced from implementation details that applications seem trivial, when in reality, the small details are exactly where value accrues."

I don't talk much about this - I obtained one of the first FDA approvals in ML + radiology and it informs much of how I think about AI systems and their impact on the world. If you're a pure technologist, you should read the following: There's so much to unpack for both why…



I recommend this podcast episode for some thankfully sober assessment of how to use AI in medicine (as a knowledge aid, esp. for less expert MDs, and as an automatic note taker, to start)

Q: What do you call the person who graduated at the bottom of their medical school class? A: A doctor.

On @CogRev_Podcast, @zakkohane raises the common sense argument of ensuring a higher baseline for doctors everywhere by integrating GPT-4 into clinical medicine.



Matt Pfeffer Reposted

It obviously matters, because it has implications for how well the models can generalize to never-before-seen inputs and tasks. Serious exacerbation of automation bias can occur if we ascribe reasoning to what is just a minor perturbation of training data.

This thread is fascinating. LLMs with RLHF are incredibly effective problem solvers we can assign tasks to. From a practical perspective, if most humans can’t tell whether GPT-4 is memorizing vs. reasoning, does the distinction even matter?


