One of the most influential scientific and philosophical viewpoints is “More is Different,” introduced in 1972 by Nobel Prize–winning physicist Philip W. Anderson, highlighting the limitations of the reductionist approach. Emergent properties, it argues, cannot be derived from the fundamental laws that govern a system's elementary particles. Generalizing this idea suggests a hierarchical structure of science, in which well-understood properties of small-scale systems cannot necessarily predict the phenomena that emerge at larger scales. This interdisciplinary perspective extends beyond physics to chemistry, molecular biology, cell biology, and the social sciences.

A research team at Tohoku University and Future University Hakodate has demonstrated that living biological neurons can be trained to perform a supervised temporal pattern learning task previously carried out by artificial systems. By integrating cultured neuronal networks into a machine learning framework, the team showed that these biological systems can generate complex time-series signals, marking a significant step forward in both neuroscience and bio-inspired computing.

In a recent study published in Nature Communications, researchers created a memristor that uses a built-in oxygen gradient to produce slow, stable conductance changes, enabling a reinforcement learning (RL) algorithm to learn faster and more stably than conventional approaches.

When a screenwriter told New York University researchers last year that letting AI do her work would make her “miserable inside,” she was onto something. A follow-up study from NYU’s Tandon School of Engineering and Stern School of Business finds that the instinct to compete with generative AI, rather than simply embrace it, is associated with meaningful long-term benefits for writing professionals.

Microsoft to invest $10 bn in Japan AI data centers

Microsoft said Friday it will invest $10 billion in Japan over the next four years to build artificial intelligence data centers and related infrastructure.

Involving people without AI expertise in the development and evaluation of artificial intelligence applications could help create better, fairer, and more trustworthy automated decision-making systems, new research suggests. After enlisting members of the public to evaluate the potential impacts of two real-world applications, researchers from UK universities will present a paper at a major international computing conference suggesting how “participatory AI auditing” could improve AI decision-making in the future.

Anthropic CEO Dario Amodei has said that AI could surpass “almost all humans at almost everything” shortly after 2027. While AI’s capabilities are certainly improving, such rapid progress might seem at odds with findings that show AI still failing at more than 95% of remote freelance projects and continuing to struggle with hallucination, long-term planning, and forms of abstract reasoning that humans find easy. But recent work from METR has found evidence that LLMs can gain capabilities in rapid surges—jumping from succeeding almost never to almost always in just a few years. If this holds across the economy, workers could be blindsided by AI advances.

A team from the Universitat Politècnica de València, part of the Valencian University Research Institute for Artificial Intelligence (VRAIN) and ValgrAI, has participated in the development of ADeLe, a new methodology that offers precise explanations and predictions regarding whether large language models (LLMs) will succeed or fail at specific new tasks they have not yet performed. Furthermore, this methodology identifies exactly the limits of any given model’s reasoning capacity.

Artificial intelligence is increasingly being used to help optimize decision-making in high-stakes settings. For instance, an autonomous system can identify a power distribution strategy that minimizes costs while keeping voltages stable.

Fair decisions, clear reasons: Creating fuzzy AI with fairness built in from the start

Although AI is not intentionally biased, it can inherit biases from the data fed into it, learning and repeating them until the system becomes inherently unfair. This is complicated by the difficulty of identifying where the AI system introduced the bias, as most AI systems display their final decision without showing the steps that led to it. Unfair patterns may go unnoticed simply because they are hard to identify.
