AI in Q2: Groundbreaking Innovations You Need to Know About

Google's Major AI Upgrades

Google has had a relatively quiet summer since its I/O event in May. However, it has now made a significant move with the release of Gemma 2 and several new updates to Gemini.

Could this mean Google is about to reclaim the spotlight in the AI arena? Let's explore further...

Source: Google

Overview: Google has recently unveiled Gemma 2, the next evolution in its lightweight AI model series, along with several upgrades to its Gemini 1.5 Pro model, including a 2 million token context window and improved coding capabilities.

Details:

  • Gemma 2 is available in two sizes: a 9 billion parameter model and a larger 27 billion parameter model, with a smaller 2.6 billion parameter version hinted for future release.

  • The 27 billion parameter model delivers performance comparable to models over twice its size.

  • The 9 billion parameter model outperforms similar models, such as Llama 3 8B.

  • In addition, Google has enabled access to Gemini 1.5 Pro's 2 million token context window, allowing for the processing of much longer inputs.

  • The new coding capabilities for Gemini Pro and Flash models enhance their ability to generate and execute Python code, improving accuracy in mathematical and data reasoning tasks.

Why it matters: These advancements position Google’s Gemma series as a powerful contender in the open-source AI landscape, offering top-tier performance while remaining efficient and cost-effective on a single GPU. Additionally, the introduction of Gemini’s 2 million token context window significantly expands the potential applications for users.
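To put the 2-million-token window in perspective, a quick back-of-envelope calculation translates it into characters and full-length books. Note that the ~4-characters-per-token ratio is a common rule of thumb for English text, not Gemini's actual tokenizer, so the figures are rough estimates:

```python
# Back-of-envelope estimate of how much text fits in a 2-million-token
# context window. The chars-per-token ratio is an assumed heuristic.

CONTEXT_TOKENS = 2_000_000
CHARS_PER_TOKEN = 4          # rough rule of thumb for English text
CHARS_PER_NOVEL = 500_000    # roughly a 300-page novel

chars = CONTEXT_TOKENS * CHARS_PER_TOKEN   # total characters that fit
novels = chars / CHARS_PER_NOVEL           # equivalent full-length novels

print(f"~{chars:,} characters, or roughly {novels:.0f} novels")
```

By this estimate, a single prompt could hold on the order of eight million characters, i.e. more than a dozen novels' worth of text.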

Apple and Meta discuss major AI partnership

Reports suggest that Apple is in discussions with Meta to integrate Meta's generative AI models into Apple Intelligence. This move could indicate a warming of the previously tense relationship between the two tech giants.

Details:

  • Apple is considering partnerships beyond OpenAI, exploring integrations with Meta, Perplexity, and Anthropic to join ChatGPT in the new Apple Intelligence system.

  • These discussions are surprising given the longstanding rivalry between Apple and Meta, particularly over privacy concerns and App Store policies.

  • The proposed deal would allow Meta and other AI partners to offer premium subscriptions through Apple devices, with Apple receiving a share of the revenue.

  • During WWDC, Apple's software chief, Craig Federighi, hinted at plans to offer users multiple AI options.

Why it matters: This potential partnership marks a significant shift in the relationship between these long-time rivals. The AI revolution may be fostering collaboration, with Apple aiming to provide users with the best AI options for various tasks and Meta gaining a substantial new distribution channel for its AI initiatives.

Nvidia’s new open-source AI models

Nvidia already dominates the AI chip market, and now it's making strides in the open-source large language model (LLM) sector as well.

The company's new 'Nemotron' model family is built to generate high-quality synthetic training data, setting the stage for even more capable AI systems. Let's explore this further...

Image source: Nvidia

Overview: Nvidia has introduced Nemotron-4 340B, a family of open-source language models designed to generate high-quality synthetic training data and develop powerful AI applications across various industries.

Details:

  • The three models (Base, Instruct, Reward) form a ‘pipeline’ for creating synthetic data to train new, powerful LLMs.

  • Instruct creates high-quality synthetic training data (and was trained on 98% synthetic data), while Reward filters the data for the highest-quality examples.

  • The Nemotron-4 models match or exceed open-source competitors like Llama-3, Mixtral, and Qwen-2 across a variety of benchmarks.

  • Nvidia also released Mamba-2 Hybrid, a selective state-space model (SSM) that surpassed similar transformer-based LLMs in accuracy.
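The Instruct-generates, Reward-filters pipeline described above can be sketched as a simple generate-score-filter loop. The two model functions below are hypothetical stand-ins for illustration only, not Nvidia's actual API:

```python
# Sketch of a synthetic-data pipeline in the spirit of Nemotron-4 340B:
# an "instruct" model drafts candidate training examples, a "reward"
# model scores them, and only high-scoring examples are kept.

def instruct_generate(prompt: str, n: int) -> list[str]:
    # Placeholder: a real pipeline would sample n responses from the
    # Instruct model here.
    return [f"{prompt} -> candidate {i}" for i in range(n)]

def reward_score(example: str) -> float:
    # Placeholder: a real pipeline would query the Reward model, which
    # rates qualities such as helpfulness and coherence.
    return len(example) % 10 / 10  # dummy score in [0, 1]

def build_synthetic_dataset(prompts: list[str], per_prompt: int = 4,
                            threshold: float = 0.5) -> list[str]:
    dataset = []
    for prompt in prompts:
        candidates = instruct_generate(prompt, per_prompt)
        # Keep only candidates the reward model rates above the threshold.
        dataset.extend(c for c in candidates if reward_score(c) >= threshold)
    return dataset

data = build_synthetic_dataset(["Explain photosynthesis"])
print(len(data))
```

The key design point is the separation of roles: generation can be cheap and plentiful, because the reward model acts as a quality gate before any example reaches a training set.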

Why it matters: Nvidia's release of these free, open-source models not only rivals the capabilities of leading competitors but also excels in generating the synthetic data necessary for developing new LLMs. This solidifies Nvidia's position as a multifaceted AI powerhouse.

