Never been dumped, huh?
Well, aren’t you feeling poetic today? (This hit too close to home after coming from my counseling session today.)
Looks like the Seattle Fault is a smaller part of the Puget Sound fault system ( Puget Sound faults - Wikipedia ): a set of shallower, more local near-surface faults, while Cascadia is the much larger and deeper fault zone along the coast.

Simulation shows tsunami waves as high as 42 feet could hit Seattle in minutes should a major earthquake occur on the Seattle Fault | CNN
A simulation released by the Washington State Department of Natural Resources (DNR) shows the impact of a 7.5-magnitude earthquake on the Seattle Fault.
What fault line are they talking about here? What is the Seattle Fault? I have never heard of it before, and it sounds like it's a completely different fault from Cascadia, which could also hit the Pacific Northwest with tsunamis.
Anyone following the Clorox Pine-Sol recall? It's wild.
30 million units of Pine-Sol are being recalled because they may contain bacteria that are harmful to humans if ingested.
The bacteria developed resistance to Pine-Sol and other antibacterial agents. This particular strain is part of the roughly 0.1% of bacteria that Pine-Sol does not kill (the flip side of the usual "kills 99.9% of germs" label claim), because it adapted to be resistant.
Also, who ingests cleaning products? Are they recalling it over concern that someone might, or because you could contaminate surfaces you thought you were cleaning?
Aw man, wish I knew to get up and look for this comet last night.
Here is a little more on that, also with links to the original scientific paper:

Earth's inner core may have stopped turning and could go into reverse, study suggests | CNN
The rotation of Earth's inner core may have paused and it could even go into reverse, new research suggests.
Oh, this is interesting. Would love to read the actual journal article about this to learn more.
The lack of copyediting in this thing is giving me a migraine.
I'm now going to have to do some calculations to see if that is, in fact, correct.
We created an algorithm and we don’t totally understand how it works, but we are going to use it anyway, and also have faith that the thing is accurate for the right reasons and not as a total fluke that gives biased results.

MIT, Harvard scientists find AI can recognize race from X-rays — and nobody knows how - The Boston Globe
As artificial intelligence is increasingly used to help make diagnostic decisions, the research raises the unsettling prospect that AI-based health systems could generate racially biased results.
A doctor can’t tell if somebody is Black, Asian, or white, just by looking at their X-rays. But a computer can, according to a surprising new paper by an international team of scientists, including researchers at the Massachusetts Institute of Technology and Harvard Medical School.
The study found that an artificial intelligence program trained to read X-rays and CT scans could predict a person’s race with 90 percent accuracy. But the scientists who conducted the study say they have no idea how the computer figures it out.
“When my graduate students showed me some of the results that were in this paper, I actually thought it must be a mistake,” said Marzyeh Ghassemi, an MIT assistant professor of electrical engineering and computer science, and coauthor of the paper, which was published Wednesday in the medical journal The Lancet Digital Health. “I honestly thought my students were crazy when they told me.”
At a time when AI software is increasingly used to help doctors make diagnostic decisions, the research raises the unsettling prospect that AI-based diagnostic systems could unintentionally generate racially biased results. For example, an AI (with access to X-rays) could automatically recommend a particular course of treatment for all Black patients, whether or not it’s best for a specific person. Meanwhile, the patient’s human physician wouldn’t know that the AI based its diagnosis on racial data.
The research effort was born when the scientists noticed that an AI program for examining chest X-rays was more likely to miss signs of illness in Black patients. “We asked ourselves, how can that be if computers cannot tell the race of a person?” said Leo Anthony Celi, another coauthor and an associate professor at Harvard Medical School.
The research team, which included scientists from the United States, Canada, Australia, and Taiwan, first trained an AI system using standard data sets of X-rays and CT scans, where each image was labeled with the person’s race. The images came from different parts of the body, including the chest, hand, and spine. The diagnostic images examined by the computer contained no obvious markers of race, like skin color or hair texture.
Once the software had been shown large numbers of race-labeled images, it was then shown different sets of unlabeled images. The program was able to identify the race of people in the images with remarkable accuracy, often well above 90 percent. Even when images from people of the same size or age or gender were analyzed, the AI accurately distinguished between Black and white patients.
But how? Ghassemi and her colleagues remain baffled...
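For anyone trying to picture the setup described above: "train on race-labeled images, then test on unlabeled ones" is a completely standard classification protocol. Below is a rough sketch of that two-phase shape, not the paper's actual code; the tiny CNN and the random stand-in "X-rays" are placeholders for the standard public datasets and architectures the study used.

```python
# Rough sketch of the two-phase protocol the article describes; NOT the
# paper's code. Model and data below are stand-ins.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

NUM_CLASSES = 3  # e.g. the racial groups the labels encode

# Stand-in data: random grayscale images, each with a race label attached.
images = torch.randn(512, 1, 64, 64)
labels = torch.randint(0, NUM_CLASSES, (512,))
train_loader = DataLoader(TensorDataset(images[:400], labels[:400]), batch_size=32)
test_loader = DataLoader(TensorDataset(images[400:], labels[400:]), batch_size=32)

# Small CNN classifier (placeholder architecture).
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 16 * 16, NUM_CLASSES),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Phase 1: train on race-labeled images.
model.train()
for epoch in range(5):
    for x, y in train_loader:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()

# Phase 2: predict race on held-out images and measure accuracy.
model.eval()
correct = total = 0
with torch.no_grad():
    for x, y in test_loader:
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
print(f"held-out race-prediction accuracy: {correct / total:.1%}")
```

On this random stand-in data the printed accuracy will sit near chance (about 33%); the startling result in the paper is that on real scans the same shape of experiment lands above 90 percent.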
I saw a talk by an AI researcher a couple of years ago who analyzed what information image-recognition AIs actually use, with heat maps showing which parts of each image drove the decision. I remember his main message: such heat maps are relatively easy to produce, and people should look more closely at what their AIs are actually doing instead of just treating them as black boxes. He had a couple of interesting examples where you could really see the AI using patterns that humans usually would not use.

Yeah, in the past AIs have come up with algorithms that worked on the training sample and failed in the wild. One, for example, learned to tell wolves from dogs, but it turned out it was just looking at the background, because in the training set the dogs were on grass and the wolves were in snow. Another had to do with identifying problematic lung scans, and the machine learned that CT scans meant big trouble and X-rays not so much, which didn't actually help the humans, who were already choosing to do CT scans only when things seemed serious.
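For what it's worth, the simplest version of those heat maps is plain gradient saliency, which really is easy to produce; fancier variants like Grad-CAM exist, and the talk may well have used one of those. A minimal sketch, where `model` and `img` are placeholders for any PyTorch image classifier and input:

```python
# Minimal gradient-saliency sketch: which pixels is the classifier most
# sensitive to? (Names are placeholders; any torch image classifier works.)
import torch

def saliency_map(model, image, target_class):
    """Heat map = |d(score of target_class) / d(input pixels)|."""
    model.eval()
    # Work on a detached copy so we can ask for input gradients.
    x = image.detach().clone().unsqueeze(0).requires_grad_(True)
    score = model(x)[0, target_class]  # scalar logit for the class
    score.backward()                   # populates x.grad
    # Collapse channels -> one heat value per pixel.
    return x.grad.abs().amax(dim=1).squeeze(0)

# Usage sketch (hypothetical model and input):
# heat = saliency_map(model, img, target_class=0)
# A wolf/dog shortcut-learner would light up the snowy background
# here instead of the animal.
```

Plain gradients are noisy, which is why Grad-CAM-style methods aggregate over feature maps instead, but for a crude "is it looking at the background?" check even this version is often enough.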