Recent Summaries

The Download: introducing the AI energy package

4 months ago · technologyreview.com

This newsletter covers the growing energy demands of AI, AI's increasing sophistication at persuasion, and the errors AI is introducing in legal settings. It also touches on a range of other tech news, from legislation on deepfakes to the rise of blended meats.

  • AI Energy Consumption: A deep dive into the significant and growing energy footprint of AI, particularly inference, with investigations into the environmental impact in places like Nevada and Louisiana.

  • AI Persuasion: Research suggesting AI can be more effective at persuading people than humans, raising concerns about its potential for manipulation.

  • AI in Law: The increasing prevalence of AI hallucinations in legal filings, leading to errors and frustration among judges.

  • Legislation and Regulation: New laws addressing deepfakes, and tariffs impacting international trade.

  • Acquisitions and Ethical Concerns: The acquisition of 23andMe by Regeneron, raising questions about the security and ethical use of genetic data.

  • The energy required for AI inference (everyday use) is predicted to surpass the energy used for training AI models.

  • AI's ability to personalize arguments makes it a potentially powerful and concerning tool for persuasion.

  • AI-generated errors are increasingly appearing in legal documents, highlighting the need for careful oversight.

  • The "Take It Down Act" criminalizing non-consensual intimate images, including deepfakes, could potentially lead to censorship issues.

  • Autonomous vehicles trained to react like humans resulted in fewer road injuries in testing scenarios.

The Human Blueprint for Smarter AI Agents

4 months ago · gradientflow.com

This newsletter focuses on a more reliable approach to text-to-SQL AI agents, moving beyond simple fine-tuning of LLMs. It highlights Timescale's methodology of mirroring how experienced analysts write SQL, resulting in improved accuracy and efficiency.

  • Human-Inspired AI: The core theme is leveraging human expertise as a blueprint for AI agent design, specifically in the context of generating SQL queries.

  • Structured Knowledge is Key: Emphasizes the importance of incorporating structured knowledge, like semantic catalogs, to ground AI systems accurately.

  • Deterministic Validation: Highlights the value of using existing deterministic tools (databases, compilers) to validate AI output and ensure correctness.

  • Workflow Transformation: Realizing AI's full potential requires going beyond simple augmentation and redesigning workflows so that AI is tightly integrated with deterministic tools and data.

  • Simple fine-tuning of LLMs for text-to-SQL is unreliable, requiring more robust approaches for production applications.

  • Timescale's Semantic Catalog and Semantic Validation modules significantly reduce query errors by mirroring expert analysts' SQL writing process.

  • Building and maintaining structured context layers and leveraging deterministic tools as oracles are crucial for trustworthy AI systems; a minimal sketch of the oracle idea follows this list.

  • AI's full potential is realized when workflows are transformed, integrating AI tightly with existing tools and structured data sources.
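
The "deterministic tools as oracles" idea above is concrete enough to sketch. The snippet below is a minimal, hypothetical illustration, not Timescale's actual code; the schema, queries, and function names are invented. It uses the database itself to validate model-generated SQL: an EXPLAIN pass forces the engine to parse the statement and resolve every table and column against the live schema without executing anything, so invalid SQL fails deterministically before it reaches a user.

```python
# Minimal sketch of "database as oracle" validation for model-generated SQL.
# Hypothetical illustration only; schema and names are invented for this example.
import sqlite3


def validate_sql(conn: sqlite3.Connection, sql: str) -> tuple[bool, str]:
    """Ask the database to plan the query without running it.

    EXPLAIN QUERY PLAN makes SQLite parse the statement and check every table
    and column against the current schema, so bad SQL fails here instead of
    at execution time.
    """
    try:
        conn.execute(f"EXPLAIN QUERY PLAN {sql}")
        return True, "query is valid against the current schema"
    except sqlite3.Error as exc:
        # In an agent loop, this message would be fed back for a corrected attempt.
        return False, str(exc)


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE metrics (ts TEXT, device_id TEXT, cpu REAL)")

    good = "SELECT device_id, AVG(cpu) FROM metrics GROUP BY device_id"
    bad = "SELECT device_id, AVG(cpu) FROM metrcs GROUP BY device_id"  # typo'd table name

    print(validate_sql(conn, good))  # (True, 'query is valid against the current schema')
    print(validate_sql(conn, bad))   # (False, 'no such table: metrcs')
```

In a production pipeline this check would sit between the LLM's draft query and execution, with the error message returned to the model so it can retry.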

BioTech Company Vows to Transform Drug Discovery with AI

4 months ago · aibusiness.com

Intrepid Labs, a biotech startup utilizing AI and robotics for drug discovery, has emerged from stealth after raising $11 million in funding. Their primary focus is on "Valiant," an AI-enabled robotic lab designed to rapidly analyze drug formulations, aiming to significantly reduce the time and cost associated with traditional drug development.

  • AI-Powered Drug Discovery: Highlights the growing trend of leveraging AI and automation to accelerate and improve the efficiency of drug development processes.

  • Robotics and Automation: Emphasizes the role of robotic labs in high-throughput screening and analysis of drug formulations.

  • Seed Funding: Indicates continued investor interest in early-stage biotech companies focused on AI-driven solutions.

  • Addressing Bottlenecks: Acknowledges the limitations of traditional drug development methods and the need for more efficient approaches.

  • Intrepid Labs aims to transform the industry's approach to drug formulation by using AI and robotics to optimize delivery, dosing, and patient experience from the initial stages.

  • Their AI-enabled platform, Valiant, can potentially reduce the formulation analysis time from months to days, leading to faster and more cost-effective therapies.

  • CEO Christine Allen highlights that traditional drug formulation methods lead to high failure rates during clinical development, a problem Intrepid Labs is directly addressing.

  • The company also develops proprietary oral and long-acting injectable delivery technologies for small molecules and biologics.

AI can do a better job of persuading people than we do

4 months ago · technologyreview.com

A new study finds that GPT-4 is significantly more persuasive than humans in online debates, especially when equipped with personal information about its opponent. This raises concerns about the potential for AI-driven disinformation campaigns and the need for strategies to counter them.

  • AI Persuasion: LLMs like GPT-4 demonstrate a remarkable ability to persuade individuals, sometimes surpassing human capabilities.

  • Personalized Arguments: Access to personal data significantly enhances AI's persuasive power.

  • Disinformation Risk: The technology could be exploited to spread disinformation and manipulate public opinion.

  • Human-AI Interaction: People may react differently to arguments depending on whether they believe they are interacting with a human or an AI.

  • Counter-Disinformation Potential: LLMs could be used to generate personalized counter-narratives to combat disinformation.

  • GPT-4 becomes dramatically more persuasive when it uses personal information, making it 64% more persuasive than human debaters who lacked access to the same data.

  • Humans given personal information about their opponents were less persuasive than those without it, suggesting AI is better at leveraging such data.

  • Participants were more likely to agree with arguments when they believed they were debating an AI, highlighting the complex psychology of human-AI interaction.

  • It is not yet clear whether this increased agreement stems from participants' beliefs about the AI or is simply a consequence of being persuaded; further research is needed.

  • There is an urgent need for more research into how people interact with AI and into effective strategies for mitigating the threats posed by AI-driven disinformation.

Beyond Automation: Building Human-Centered HR With AI

4 months ago · aibusiness.com

This newsletter focuses on the crucial balance between technological efficiency and human connection in HR as AI becomes increasingly integrated. It emphasizes the need for a human-centered approach to AI implementation, ensuring fairness, transparency, and ethical considerations are central to the process. The TRUSTED framework offers a roadmap for organizations navigating this evolving landscape.

  • Human-Centered AI: The central theme is ensuring AI in HR complements, not replaces, human judgment and empathy.

  • The TRUSTED Framework: This framework offers guidance on Transparency, Regulation, Usability, Security, Technology, Ethics, and Data, promoting responsible AI adoption.

  • Strategic HR Role: HR is crucial in shaping policies, upskilling employees, and managing the cultural shift brought about by AI.

  • Transparency and Trust: Building trust through open communication about AI's role and limitations is critical for employee acceptance.

  • AI's value in HR lies in complementing human capabilities, not just automating processes, ensuring a focus on employee well-being.

  • Economic pressures should not lead to rushed AI implementations that erode employee trust or sideline ethical considerations. A phased approach is recommended.

  • The TRUSTED framework serves as a practical guide for ethical and effective AI integration in HR, addressing key areas from data security to fairness.

  • HR's strategic role is evolving to include shaping policies, upskilling employees, and managing the cultural impact of AI on the workforce.

ChatGPT Codex: The Missing Manual

4 months ago · latent.space

This Latent Space newsletter discusses the release of ChatGPT Codex, OpenAI's cloud-hosted Autonomous Software Engineer (A-SWE), and provides best practices for its use. It features insights from Josh Ma and Alexander Embiricos of OpenAI, detailing how to leverage Codex for efficient coding workflows and maximizing its potential.

  • Autonomous Software Engineering (A-SWE): Focus on the emerging field of AI agents capable of independent software engineering work, exemplified by ChatGPT Codex.

  • Best Practices for Codex: Emphasis on adopting an abundance mindset, grooming an Agents.md file for instructions (a hypothetical example appears at the end of this summary), ensuring codebase discoverability, and using Codex from the ChatGPT mobile app.

  • Human-AI Collaboration: Explores the evolving dynamics between human developers and AI agents, including adapting coding practices to better suit AI assistance.

  • Model Training and Improvement: Highlights OpenAI's approach to model training, focusing on generalization and transfer learning to enhance model capabilities.

  • Environment Customization: Addresses the need for customizing the agent's environment and the ongoing efforts to strike a balance between accessibility, security, and control.

  • Codex represents a significant step towards AI-driven software development, shifting the focus from prompt engineering to creating autonomous agents capable of end-to-end software engineering tasks.

  • The success of Codex relies heavily on adopting new coding practices that prioritize modularity, discoverability, and machine-readable instructions via the Agents.md file.

  • OpenAI is intentionally pushing the frontier of single-shot autonomous software engineering, while also recognizing the value of human-in-the-loop approaches.

  • The compute platform is designed with security in mind, initially limiting network access but with plans to evolve towards more flexible configurations.

  • OpenAI is actively seeking feedback from the community to improve Codex, particularly regarding environment customization and identifying workflows where it can provide the most value.
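
Because the Agents.md recommendation above is essentially a configuration convention, a small example helps make it concrete. The file below is entirely hypothetical: the commands, directory names, and rules are invented for illustration and are not drawn from the newsletter or from OpenAI's documentation. The point is that the instructions are short, explicit, and machine-readable, so the agent can discover how to build, test, and navigate the repository without guessing.

```markdown
# Agents.md (hypothetical example)

## Setup
- Install dependencies with `make install`; run everything through `make` targets, not ad-hoc scripts.

## Testing
- Run `make test` before finishing any task; a change is not done until the suite passes.

## Code layout
- `api/` holds HTTP handlers, `core/` holds business logic, `migrations/` holds schema changes.
- Keep modules small and self-contained so individual tasks stay reviewable.

## Conventions
- Match existing formatting (`make lint`); do not add new dependencies without explaining why.
```

Teams would tailor the sections to their own build and review conventions; the value comes from keeping the file short, current, and aligned with how the codebase actually works.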