I agree it's hard to distinguish fraud from mistakes or sloppiness. That's why I think the lab has to be locked down and a forensic team needs to be sent in to try to determine what actually happened. This has the secondary effect of incentivizing greater rigor, which is desperately needed since nobody wants their lab to be shut down for a few days for forensic teams to analyze everything. Anyone who reports fraud that is later verified should get a monetary reward.
I know this all sounds incredibly harsh and draconian, but I think these are the measures we need, or at least we need to try them for some time and see how it goes. I suspect a lot of "sloppiness" is actually deliberate fraud or fudging of data. The only way to figure out how often that is the case is to do forensic investigations, I think.
I personally have failed to replicate a former lab member's published results and could find no evidence on any computers that they actually trained the AI models they said they trained. It was pretty clearly fraud, so this entire matter hits home for me. The episode was frustrating not only because I wasted time, but because the person in question also did several other unethical things, such as lying on their CV and website, and went on to receive an award for the fraudulent work and become an assistant professor. I have spoken with at least two other people who have had similar experiences: they failed to replicate a former lab member's work and concluded it was deliberate fraud.