Chip manufacturer Nvidia announced on Monday its plan to build AI supercomputers entirely in the United States, a significant shift for the industry. The decision follows President Trump's recently announced tariffs on imported semiconductors and an investigation into chip imports from China. Nvidia has begun the initial phase of manufacturing its Blackwell chips at TSMC's facility in Phoenix, Arizona, and is working with partners including Foxconn and Wistron to build new factories in Houston and Dallas. The plants are expected to use "digital twins," virtual simulations that make the construction process more efficient. Experts caution, however, that Trump's tariffs could raise the cost of building AI data centers, which rely heavily on raw materials sourced from abroad, Forbes reports.
For those interested, our seventh annual AI 50 list is available to explore.
Let’s dive into the latest headlines.
ETHICS AND LAW
Across the United States, community colleges are confronting a surge of enrollments from "bot" students: fraudulent enrollees who register for courses under stolen or fabricated identities to exploit state and federal funding. These automated students submit AI-generated assignments to stay enrolled just long enough to collect financial aid. In a startling statistic, 25% of community college applicants in California in 2024 were identified as bots, according to a report by Voice of San Diego.
PEAK PERFORMANCE
Google has developed DolphinGemma, an AI model designed to analyze and interpret the complex sounds dolphins make in order to better understand their communication. The 400-million-parameter model is trained on data from the Wild Dolphin Project, a nonprofit dedicated to researching Atlantic spotted dolphins. The ultimate aim is to develop technology that could enable meaningful two-way communication between humans and dolphins.
TALENT RETENTION
The competition for talent in the AI sector remains fierce. Google DeepMind has imposed stringent noncompete agreements that bar some former employees from joining rival firms for up to 12 months after their departure. Business Insider reports that although employees remain on the payroll during this period, the practice has raised concerns about employee rights. Nando de Freitas, a former director at Google DeepMind, expressed his discontent with the policies, calling them an "abuse of power" that lacks justification.
HUMANS OF AI
May Habib, CEO and co-founder of the enterprise AI startup Writer, valued at $1.9 billion, is focused on transforming how businesses use AI. Writer offers a platform that lets companies such as Intuit, Salesforce, and Uber build tailored AI applications for marketing, human resources, sales, and other functions. Featured on the Forbes AI 50 list, the startup is now pivoting to launch a new platform for AI "agents" capable of performing specific tasks independently. From championing machine translation software in 2016 to developing its cost-effective Palmyra family of AI models, the company has consistently aimed to meet customer needs.
DEEP DIVE
Recent findings from Cato Networks reveal that ChatGPT can easily produce fraudulent documents, including passports, driver’s licenses, and social security cards.
In March, OpenAI enhanced ChatGPT with new image generation features, which quickly gained traction as users flooded social media with Studio Ghibli-inspired creations. It has since come to light, however, that ChatGPT can be manipulated into generating various types of fake documents. OpenAI maintains that its goal is to give users creative freedom; it embeds C2PA metadata in AI-generated images to identify their origin and enforces its usage policies.
Etay Maor, chief security strategist at Cato Networks, noted that while forgeries have historically been difficult to acquire, AI tools like ChatGPT have made producing realistic fake documents simpler and faster. He warned that such documents can enable a range of fraudulent activities, including financial, medical, and insurance fraud. The ease of access is a worrying trend, as even individuals without criminal backgrounds could exploit these technologies.
The misuse of AI isn’t a novel concept; similar tools have been previously leveraged to create malicious software and craft phishing attacks. The trend underscores vulnerabilities in multiple forms of communication, including voice, video, and imagery, potentially facilitating more sophisticated criminal operations. “All these elements that build trust—style, visuals, voice, and credentials—are increasingly compromised,” Maor cautioned.
WEEKLY DEMO
A startup named InTouch is using AI to place phone calls to elderly relatives when family members are too busy to call themselves. The AI can hold a conversation and ask about various topics; afterward, the family member who initiated the call receives a summary of the conversation and insights into their relative's emotional state. Commenting on the product, journalist Joseph Cox called the notion of AI stepping in to converse with lonely family members "dystopian, insulting, and especially non-human."
MODEL BEHAVIOR
During a recent speech at the ASU+GSV Summit in San Diego, Education Secretary Linda McMahon confused AI (artificial intelligence) with A1 (the steak sauce brand), a humorous slip the sauce brand capitalized on with a social media post: "You heard her. Every school should have access to A1."
Source: www.forbes.com