The N&G Science Discussion Thread


What fault line are they talking about here? What is the Seattle Fault? I have never heard of it before and it sounds like it's a completely different fault than Cascadia, which also could hit the Pacific Northwest with tsunamis.
 

What fault line are they talking about here? What is the Seattle Fault? I have never heard of it before and it sounds like it's a completely different fault than Cascadia, which also could hit the Pacific Northwest with tsunamis.
Looks like the Seattle Fault is a smaller member of the Puget Sound fault system ( Puget Sound faults - Wikipedia ), a shallower, more local near-surface fault, while Cascadia is the much larger and deeper fault zone along the coast.
 

Interesting. I've always thought that the Earth's rotation is very gradually slowing down over time. And while that still seems to be the long-term trend, it's still possible for the Earth's rotation to speed up. Apparently it has just done exactly that, and it might keep trending that way for the next 50 years.

Interesting read above.
 
Anyone following the Clorox Pine-Sol recall? It's wild.

30 million units of Pine-Sol are being recalled because they may contain a bacterium that is harmful to humans if ingested.

This bacterium developed resistance to Pine-Sol and other antibacterial agents. This particular strain is part of the 0.01% of bacteria that Pine-Sol does not kill, because it adapted to be resistant.

Also, who ingests cleaning products? Are they recalling it over concern that someone might, or because you could contaminate surfaces you thought you were cleaning?
 
I'm really worried there is too much light pollution around my area even then.

I'd probably have to go to my mother's place in the country to attempt to see it, which I can't swing tonight or tomorrow night.
 

Oh, this is interesting. I'd love to read a scientific journal article about this to learn more.
Here is a little more on that, with links to the original scientific paper.
 
[attached image]


I'm now going to have to do some calculations to see if that is, in fact, correct.
 

A doctor can’t tell if somebody is Black, Asian, or white, just by looking at their X-rays. But a computer can, according to a surprising new paper by an international team of scientists, including researchers at the Massachusetts Institute of Technology and Harvard Medical School.

The study found that an artificial intelligence program trained to read X-rays and CT scans could predict a person’s race with 90 percent accuracy. But the scientists who conducted the study say they have no idea how the computer figures it out.

“When my graduate students showed me some of the results that were in this paper, I actually thought it must be a mistake,” said Marzyeh Ghassemi, an MIT assistant professor of electrical engineering and computer science, and coauthor of the paper, which was published Wednesday in the medical journal The Lancet Digital Health. “I honestly thought my students were crazy when they told me.”

At a time when AI software is increasingly used to help doctors make diagnostic decisions, the research raises the unsettling prospect that AI-based diagnostic systems could unintentionally generate racially biased results. For example, an AI (with access to X-rays) could automatically recommend a particular course of treatment for all Black patients, whether or not it’s best for a specific person. Meanwhile, the patient’s human physician wouldn’t know that the AI based its diagnosis on racial data.

The research effort was born when the scientists noticed that an AI program for examining chest X-rays was more likely to miss signs of illness in Black patients. “We asked ourselves, how can that be if computers cannot tell the race of a person?” said Leo Anthony Celi, another coauthor and an associate professor at Harvard Medical School.

The research team, which included scientists from the United States, Canada, Australia, and Taiwan, first trained an AI system using standard data sets of X-rays and CT scans, where each image was labeled with the person’s race. The images came from different parts of the body, including the chest, hand, and spine. The diagnostic images examined by the computer contained no obvious markers of race, like skin color or hair texture.

Once the software had been shown large numbers of race-labeled images, it was then shown different sets of unlabeled images. The program was able to identify the race of people in the images with remarkable accuracy, often well above 90 percent. Even when images from people of the same size or age or gender were analyzed, the AI accurately distinguished between Black and white patients.

But how? Ghassemi and her colleagues remain baffled...
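
To make the protocol described above concrete, here is a minimal, purely illustrative sketch of "train on labeled examples, then score on examples whose labels the model never saw." It uses synthetic feature vectors and scikit-learn, not the paper's models or data; the point is only that a subtle group-level shift in the data, invisible to a human, can be enough for a classifier to pick up.

```python
# Illustrative sketch only -- synthetic stand-in data, not the study's method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for features extracted from images: two groups whose feature
# distributions differ only by a small, subtle offset.
n, d = 2000, 64
labels = rng.integers(0, 2, size=n)           # 0/1 group label per "image"
offset = 0.25 * rng.standard_normal(d)        # subtle group-level shift
features = rng.standard_normal((n, d)) + np.outer(labels, offset)

# Train on the labeled set...
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=0
)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# ...then evaluate on examples the model never saw labels for.
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

Even this toy model scores well above chance on the held-out set, despite there being no single feature a person could point to.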
 

We created an algorithm, and we don't totally understand how it works, but we're going to use it anyway, and also have faith that the thing is accurate for the right reasons and not a total fluke that gives biased results.
 
We created an algorithm, and we don't totally understand how it works, but we're going to use it anyway, and also have faith that the thing is accurate for the right reasons and not a total fluke that gives biased results.

Yeah, in the past AIs have come up with algorithms that worked on the training sample and failed in the wild. One, for example, learned to tell wolves from dogs, but it turned out it was just looking at the background, because in the training set the dogs were on grass and the wolves were in snow. Another had to do with identifying problematic lung scans, and the machine learned that CT scans meant big trouble and X-rays not so much, which didn't actually help the humans who were already choosing to do CT scans only when things seemed serious.
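
That failure mode is easy to reproduce. Here's a toy sketch of it (my own synthetic demo, not from any of those studies): a model that leans on a spurious "background" feature aces its training distribution and falls apart once that correlation breaks.

```python
# Toy "shortcut learning" demo -- illustrative only, synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_data(n, spurious_matches_label):
    label = rng.integers(0, 2, size=n)
    real = label + 0.9 * rng.standard_normal(n)        # weak genuine signal
    if spurious_matches_label:
        background = label.astype(float)               # grass vs. snow, say
    else:
        background = rng.integers(0, 2, size=n).astype(float)  # correlation broken
    return np.column_stack([real, background]), label

X_train, y_train = make_data(5000, spurious_matches_label=True)
X_wild,  y_wild  = make_data(5000, spurious_matches_label=False)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"training-distribution accuracy: {clf.score(X_train, y_train):.2f}")  # near 1.00
print(f"in-the-wild accuracy:           {clf.score(X_wild, y_wild):.2f}")    # much lower
```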
 
We created an algorithm, and we don't totally understand how it works, but we're going to use it anyway, and also have faith that the thing is accurate for the right reasons and not a total fluke that gives biased results.

Yeah, in the past AIs have come up with algorithms that worked on the training sample and failed in the wild. One, for example, learned to tell wolves from dogs, but it turned out it was just looking at the background, because in the training set the dogs were on grass and the wolves were in snow. Another had to do with identifying problematic lung scans, and the machine learned that CT scans meant big trouble and X-rays not so much, which didn't actually help the humans who were already choosing to do CT scans only when things seemed serious.
I saw a talk by an AI researcher a couple of years ago who analyzed what information image-recognition AIs actually use, showing heat maps of which parts of the images mattered. I remember his main message: such heat maps are relatively easy to produce, and people should look more at what their AIs are actually doing rather than just treating them as black boxes. He had a couple of interesting examples where you could really see patterns the AI used that humans usually would not.
But he also had one nice example of the recognition of horses, where the heat map showed that all the AI recognized was the word "horse" in the picture caption, because they had used captioned pictures as training data.
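Heat maps like that really are easy to produce. Here's a minimal occlusion-style sketch (my own toy code, not the speaker's; the "caption-reading" model is hypothetical) that slides a gray patch over an image and records how much the model's score drops at each position. Big drops mark the regions the model actually relies on, such as a caption strip.

```python
# Occlusion-sensitivity heat map -- illustrative toy, not the speaker's code.
import numpy as np

def occlusion_heatmap(image, score_fn, patch=8, stride=4):
    """image: 2-D array; score_fn: maps an image to a scalar class score."""
    h, w = image.shape
    base = score_fn(image)
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            occluded = image.copy()
            y, x = i * stride, j * stride
            occluded[y:y + patch, x:x + patch] = image.mean()  # gray out one patch
            heat[i, j] = base - score_fn(occluded)             # score drop here
    return heat

def score_fn(img):
    # Hypothetical "horse classifier" that really just reads the bottom
    # strip of the image, where a caption would sit.
    return img[-10:, :].sum()

image = np.zeros((64, 64))
image[-10:, 20:44] = 1.0   # stands in for the word "horse" in a caption
heat = occlusion_heatmap(image, score_fn)
print("hottest patch (row, col):", np.unravel_index(heat.argmax(), heat.shape))
```

The hottest patches land squarely on the fake "caption," exactly the kind of giveaway the speaker's horse example showed.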
 