Comments & Contributions
About AI…
“I agree with your assessment in the June 21 issue about the effect of AI on the information publishing industry. I’m finding that even the newest paid models are very untrustworthy when it comes to health research. For instance, the latest ScholarGPT and ChatGPT builds are constantly providing me with scientific studies that don’t exist.
“With enough nagging, it will eventually stop lying, fess up, and apologize. But its behavior never changes. Then I make excuses for it and keep using it. Classic toxic relationship stuff. In fairness, at the highest paid/premium access levels, you can somewhat train it to avoid doing this. But even then, it’s dicey.
“This issue will likely be resolved within the next year or two at most. But in the meantime, I’ve found that it does a very good job of extrapolating data from (real!) studies that you provide it with. (For instance, quickly calculating something like absolute risk where it wasn’t explicitly provided by the study authors.)” – JZ
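The kind of calculation JZ describes, deriving an absolute risk figure from a study’s raw event counts when the authors report only relative measures, can be sketched as follows. This is a hypothetical illustration, not from any letter or study; the function name and all numbers are made up:

```python
# Hypothetical sketch: computing absolute risk reduction (ARR)
# from the raw event counts a study reports, when the authors
# give only relative figures. All numbers are illustrative.

def absolute_risk_reduction(events_control, n_control,
                            events_treated, n_treated):
    """ARR = control-arm event rate minus treated-arm event rate."""
    return events_control / n_control - events_treated / n_treated

# e.g. 40/1000 events in the control arm vs. 30/1000 in the treated arm
arr = absolute_risk_reduction(40, 1000, 30, 1000)
print(f"ARR: {arr:.3f}")    # 0.010, i.e. a 1-percentage-point reduction
print(f"NNT: {1/arr:.0f}")  # number needed to treat: 100
```

Even when an AI assistant performs this arithmetic for you, the inputs (the event counts) should come straight from the real study, as JZ notes.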
“Thanks for getting me interested in AI. I’ve started dipping my toe into it and find it very informative and a time saver. But it all comes back to how good the information is that goes into it. Just think how easy it might be to rewrite history, if we let it.” – TA