Categories: Mathematics: Modeling
Self-improving AI method increases 3D-printing efficiency (via sciencedaily.com)
An artificial intelligence algorithm can help researchers use 3D printing more efficiently to manufacture intricate structures. The development could enable more seamless use of 3D printing for complex designs in everything from artificial organs to flexible electronics and wearable biosensors. As part of the study, the algorithm learned to identify and then print the best versions of kidney and prostate organ models, producing 60 continually improving versions.
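As a rough illustration of that kind of select-and-improve loop, the sketch below iterates a print, score, keep cycle over 60 candidate parameter sets. The scoring function, parameter names, and mutation step are hypothetical stand-ins for illustration, not details from the study.

```python
import random

def score_print(params):
    """Hypothetical quality score for a printed model (higher is better).

    In the study this would come from inspecting the printed part;
    here it is a stand-in objective for illustration only.
    """
    target = {"speed": 0.4, "extrusion": 0.7, "temperature": 0.55}
    return -sum((params[k] - target[k]) ** 2 for k in target)

def mutate(params, step=0.05):
    """Propose a slightly perturbed set of print parameters, clamped to [0, 1]."""
    return {k: min(1.0, max(0.0, v + random.uniform(-step, step)))
            for k, v in params.items()}

best = {"speed": 0.5, "extrusion": 0.5, "temperature": 0.5}
best_score = score_print(best)
for iteration in range(60):           # 60 successive prints, as in the summary
    candidate = mutate(best)
    candidate_score = score_print(candidate)
    if candidate_score > best_score:  # keep only improvements
        best, best_score = candidate, candidate_score
    print(f"print {iteration + 1:2d}: best score so far {best_score:.4f}")
```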
Peering into the mind of artificial intelligence to make better antibiotics (via sciencedaily.com)
Artificial intelligence (AI) has exploded in popularity of late, but, just as with a human, it is hard to read an AI model's mind. Explainable AI (XAI) could help do just that by providing justification for a model's decisions. Researchers are now using XAI to scrutinize predictive AI models more closely, which could help in making better antibiotics.
AI model aids early detection of autism (via sciencedaily.com)
A new machine learning model can predict autism in young children from relatively limited information. The model can facilitate early detection of autism, which is important for providing the right support.
Why do researchers often prefer safe over risky projects? Explaining risk aversion in science (via sciencedaily.com)
According to a new study, a mathematical framework built on the economic theory of hidden-action models shows how the unobservable nature of effort and risk shapes investigators' research strategies and the incentive structures within which they work.
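For readers unfamiliar with hidden-action (moral-hazard) models, a stylized version of the setup is sketched below. This is the textbook formulation of the model class, not the study's specific framework.

```latex
% Stylized hidden-action setup: a funder designs a reward schedule w(x) based
% only on the observed outcome x, while the researcher privately chooses an
% effort/risk level e. Textbook model class, not the study's own framework.
\[
  \max_{w(\cdot)} \; \mathbb{E}\left[\, x - w(x) \mid e \,\right]
\]
\[
  \text{s.t.}\quad
  e \in \arg\max_{e'} \; \mathbb{E}\left[\, u(w(x)) \mid e' \,\right] - c(e')
  \qquad \text{(incentive compatibility)}
\]
\[
  \mathbb{E}\left[\, u(w(x)) \mid e \,\right] - c(e) \;\ge\; \bar{u}
  \qquad \text{(participation)}
\]
```

Because the funder observes only the outcome x and not the effort or risk choice e, a researcher who takes on a risky project bears the chance of a poor outcome without the underlying effort being visible, one mechanism that can push investigators toward safer projects.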
In subdivided communities, cooperative norms evolve more easily (via sciencedaily.com)
Researchers simulated social norms with a supercomputer. Their findings contribute to a deeper understanding of the evolution of social norms and their role in fostering cooperative behavior.
Leading AI models struggle to identify genetic conditions from patient-written descriptions (via sciencedaily.com)
Researchers discover that while artificial intelligence (AI) tools can make accurate diagnoses from textbook-like descriptions of genetic diseases, the tools are significantly less accurate when analyzing summaries written by patients about their own health. These findings demonstrate the need to improve these AI tools before they can be applied in health care settings to help make diagnoses and answer patient questions.
Think fast -- or not: Mathematics behind decision making (via sciencedaily.com)
New research explains the mathematics behind how initial predispositions and additional information affect decision making.
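One standard way to formalize how a predisposition and incoming information interact is Bayesian evidence accumulation. The sketch below is a generic illustration of that idea, with made-up priors and likelihood ratios, rather than the specific model in the new research.

```python
import math

def log_odds(p):
    """Convert a probability into log-odds."""
    return math.log(p / (1 - p))

def posterior(prior, likelihood_ratios):
    """Combine a prior predisposition with a stream of evidence.

    prior: initial probability assigned to hypothesis H.
    likelihood_ratios: P(observation | H) / P(observation | not H) per observation.
    """
    belief = log_odds(prior)
    for lr in likelihood_ratios:
        belief += math.log(lr)          # each observation shifts the log-odds
    return 1 / (1 + math.exp(-belief))  # back to a probability

# Likelihood ratios below 1 favor "not H". A strong predisposition (0.9) is
# barely overturned by this evidence, while a weak one (0.55) is moved quickly.
evidence = [0.5, 0.6, 0.5, 0.7]
print(posterior(0.90, evidence))
print(posterior(0.55, evidence))
```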
AI poses no existential threat to humanity, new study finds (via sciencedaily.com)
Large Language Models (LLMs) are entirely controllable through human prompts and lack 'emergent abilities'; that is, the means to form their own insights or conclusions. Increasing model size does not lead LLMs to gain emergent reasoning abilities, meaning they will not develop hazardous abilities and therefore do not pose an existential threat. A new study sheds light on the (until now unexplained) capabilities and shortcomings of LLMs, including their need for carefully engineered prompts to perform well.
Researchers develop AI model that predicts the accuracy of protein--DNA binding (via sciencedaily.com)
A new artificial intelligence model can predict how different proteins may bind to DNA.
Researchers outline promises, challenges of understanding AI for biological discovery (via sciencedaily.com)
Machine learning is a powerful tool in computational biology, enabling the analysis of a wide range of biomedical data such as genomic sequences and biological imaging. But when researchers use machine learning in computational biology, understanding model behavior remains crucial for uncovering the underlying biological mechanisms in health and disease. Researchers now propose guidelines that outline pitfalls and opportunities for using interpretable machine learning methods to tackle computational biology problems.
A new way of thinking about the economy could help protect the Amazon, and help its people thrive (via sciencedaily.com)
To protect the Amazon and support the wellbeing of its people, its economy needs to shift from environmentally harmful production to a model built around standing forests and the diversity of Indigenous and rural communities.
Cracking the code of life: new AI model learns DNA's hidden language (via sciencedaily.com)
With GROVER, a new large language model trained on human DNA, researchers can now attempt to decode the complex information hidden in our genome. GROVER treats human DNA as a text, learning its rules and context to extract functional information from DNA sequences.
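As a rough picture of what "treating DNA as a text" means, the sketch below tokenizes a sequence into overlapping k-mer "words" that a language model could consume. GROVER is reported to learn its own vocabulary from the data, so this fixed k-mer scheme is a simplified illustration, not its actual tokenizer.

```python
def kmer_tokenize(sequence, k=6, stride=1):
    """Split a DNA sequence into overlapping k-mer 'words'.

    This mirrors the general idea of treating DNA as text; GROVER itself
    learns its vocabulary from data rather than using fixed k-mers.
    """
    sequence = sequence.upper()
    return [sequence[i:i + k] for i in range(0, len(sequence) - k + 1, stride)]

tokens = kmer_tokenize("ATGCGTACGTTAGC", k=6)
print(tokens)  # ['ATGCGT', 'TGCGTA', 'GCGTAC', ...]
# These tokens can then be fed to a standard language-model pipeline
# (embedding, transformer layers, next-token prediction) just like words.
```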
Method prevents an AI model from being overconfident about wrong answers (via sciencedaily.com)
Thermometer, a new calibration technique tailored for large language models, can prevent LLMs from being overconfident or underconfident about their predictions. The technique aims to help users know when a model should be trusted.
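Thermometer builds on the idea of temperature scaling. The sketch below shows plain temperature scaling applied to a model's output logits, with made-up numbers, just to illustrate how calibration tempers confidence; it is not the Thermometer method itself, whose reported contribution is choosing a suitable temperature for a new task without labeled data for that task.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; temperature > 1 softens overconfident outputs."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for a 3-way answer; the raw model is very sure of option 0.
logits = [5.0, 1.0, 0.5]
print(softmax(logits))                   # roughly [0.97, 0.02, 0.01]  (overconfident)
print(softmax(logits, temperature=3.0))  # roughly [0.67, 0.18, 0.15]  (tempered)
# In classical temperature scaling the temperature is fitted on held-out
# labeled data; a well-chosen value brings stated confidence closer to
# actual accuracy.
```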
Demographics of North African human populations unravelled using genomic data and artificial intelligence (via sciencedaily.com)
A new study places the origin of the Imazighen in the Epipaleolithic, more than twenty thousand years ago. The research concludes that the genetic origin of the current Arab population of North Africa is far more recent than previously believed, placing it in the seventh century AD. The team designed an innovative demographic model that uses artificial intelligence to analyze the complete genomes of the two populations.
Researchers explore the potential of clean energy markets as a hedging tool (via sciencedaily.com)
Clean energy investments offer potential stability and growth, especially during volatile market conditions. A recent study explored the relationship between clean energy markets and global stock markets. Significant spillovers were observed from major indices like the S&P 500 to markets such as Japan's Nikkei 225 and the Global Clean Energy Index. These interactions suggest opportunities for optimizing investment portfolios and leveraging clean energy assets as hedging tools in volatile market environments.
Breaking MAD: Generative AI could break the internet, researchers find (via sciencedaily.com)
Researchers have found that training successive generations of generative artificial intelligence models on synthetic data gives rise to self-consuming feedback loops.
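A toy version of such a self-consuming loop: fit a distribution to data, sample from the fit, keep only the most "typical" samples, and refit. The quality filter here is a crude stand-in for how generative outputs are often curated, and the numbers are illustrative only, not the paper's experiments.

```python
import random
import statistics

random.seed(0)

# Generation 0: "real" data from a wide distribution.
data = [random.gauss(0.0, 1.0) for _ in range(1000)]

for generation in range(1, 8):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    print(f"generation {generation}: spread (std) = {sigma:.3f}")
    # "Train" the next model only on the previous model's own outputs, keeping
    # the samples it rates as most typical (a crude stand-in for curation).
    samples = [random.gauss(mu, sigma) for _ in range(2000)]
    samples.sort(key=lambda x: abs(x - mu))
    data = samples[:1000]
# The spread shrinks generation after generation: the synthetic-only loop
# gradually forgets the variety of the original data.
```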
When allocating scarce resources with AI, randomization can improve fairness (via sciencedaily.com)
Researchers argue that, in some situations where machine-learning models are used to allocate scarce resources or opportunities, randomizing decisions in a structured way may lead to fairer outcomes.
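One simple form of structured randomization is a weighted lottery, in which the model's scores set selection probabilities instead of a hard cutoff. The sketch below contrasts the two with hypothetical applicants and scores; it is not the authors' specific procedure.

```python
import random

# Hypothetical applicants with model scores (e.g., predicted benefit from a program).
applicants = {"A": 0.92, "B": 0.88, "C": 0.86, "D": 0.55, "E": 0.40}
slots = 2

def hard_cutoff(scores, k):
    """Deterministic allocation: always pick the top-k scores."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

def weighted_lottery(scores, k):
    """Structured randomization: higher scores mean higher odds, but applicants
    just below the cutoff are not excluded with certainty."""
    chosen = []
    pool = dict(scores)
    for _ in range(k):
        names, weights = zip(*pool.items())
        pick = random.choices(names, weights=weights, k=1)[0]
        chosen.append(pick)
        del pool[pick]
    return chosen

print(hard_cutoff(applicants, slots))       # always ['A', 'B']
print(weighted_lottery(applicants, slots))  # varies run to run, e.g. ['A', 'C']
```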
Raindrops grow with turbulence in clouds (via sciencedaily.com)
Tackling a long-time mystery, scientists have found that the turbulent movements of air in clouds play a key role in the growth of water droplets and the initiation of rain. The research can improve computer model simulations of weather and climate and ultimately lead to better forecasts.
Can a computer tell patients how their multiple sclerosis will progress? (via sciencedaily.com)
Machine learning models can reliably inform clinicians about the disability progression of multiple sclerosis, according to a new study published this week in the open-access journal PLOS Digital Health by Edward De Brouwer of KU Leuven, Belgium, and colleagues.
Large language models don't behave like people, even though we may expect them to (via sciencedaily.com)
People form beliefs about a large language model's capabilities by generalizing from what they have seen in past interactions with it. When the model's behavior is misaligned with those beliefs, even an extremely capable model may fail unexpectedly when deployed in a real-world situation.