Hacker News

That's nonsense. I could spend a whole day creating fake papers with AI, then feeding them back to another AI to check their "quality". Does this make the papers "remembered" by the AI? If so, then we have deeper problems and we shouldn't be using AI for anything related to science.

The key option in ChatGPT is under Data controls.

"Improve the model for everyone - Allow your content to be used to train our models, which makes ChatGPT better for you and everyone who uses it."

It's this option that gives people pause.


not the fact that a 4-year-old on LSD is deciding what qualifies as good science?

I think he means with respect to the leaking issue we were discussing.

If someone were just, like, wiring ChatGPT up to automatically review papers, or using Grok to automatically review grants with minimal human intervention, that'd obviously be a totally nuts thing to do. But who would do such a thing, right?



