Impakti.com
Impakti is a bilingual digital news platform delivering news and analysis in English and Albanian.

© 2026 Impakti. All rights reserved.

Technology

Monday, March 2, 2026

Innovation · Trends · Analysis

Elon Musk has reopened the lawsuit against OpenAI, accusing the company of abandoning its mission.

Elon Musk has withdrawn and then reopened his lawsuit against OpenAI, accusing the company of abandoning its non-profit mission. The renewed lawsuit names new defendants, including Microsoft, LinkedIn co-founder Reid Hoffman, and Dee Templeton, a former OpenAI board member and Microsoft vice president. It also adds several new plaintiffs, including Shivon Zilis, a Neuralink executive and former OpenAI board member, and Musk’s company.

Musk’s lawyers allege that OpenAI is trying to eliminate competitors by “asking investors not to fund them.” According to the lawsuit, OpenAI is unfairly exploiting Microsoft’s infrastructure and expertise in a deal the complaint describes as a “de facto merger.”

One of the allegations concerns Hoffman, who served on the boards of both Microsoft and OpenAI while also being a partner at the investment firm Greylock, giving him insider knowledge of the companies’ dealings. Hoffman is also believed to have had ties to Inflection, an AI startup that Microsoft acquired and that could be considered a competitor to OpenAI.

The lawsuit also names Templeton, accusing her of facilitating anticompetitive agreements between Microsoft and OpenAI. Musk and the other plaintiffs allege that these agreements violate antitrust laws.
Technology

Meta makes its Llama models available for national security applications

To combat the perception that its artificial intelligence is aiding foreign adversaries, Meta said today that it is making its “Llama” series of AI models available to U.S. government agencies and their national security contractors. “We are pleased to confirm that we are making Llama available to U.S. Government agencies, including those working on defense and national security applications, and to private sector partners who support their work,” Meta wrote in a blog post. “We’re partnering with companies including Accenture, Amazon Web Services, Anduril, Booz Allen, Databricks, Deloitte, IBM, Leidos, Lockheed Martin, Microsoft, Oracle, Palantir, Scale AI and Snowflake to bring Llama to government agencies.”

Among those partners, Oracle is using Llama to process aircraft maintenance documents, Scale AI is adapting Llama to support specific missions of national security teams, and Lockheed Martin is offering Llama to its defense customers for tasks such as generating computer code.

Meta’s policy normally prohibits developers from using Llama for any project related to military, combat or espionage missions. But the company is making an exception in this case, as well as for similar government agencies in the United Kingdom, Canada, Australia and New Zealand, Meta told Bloomberg.

Technology

The AI Revolution is Facing Consumer Backlash Over Privacy

As artificial intelligence (AI) is rapidly integrated into our everyday devices, consumers find themselves in an increasingly complex relationship with digital giants like Google, Meta, Microsoft, and Apple. These companies are introducing AI-powered features for everything from composing emails to editing photos, touting them as essential tools for modern life. But consumer reaction has been mixed, with many wondering who asked for this AI invasion, and whether it is here to help or hinder.

The AI boom is evident across platforms. On Google, a query now often produces an AI-generated summary before the traditional search results. On Meta’s Instagram, a simple search can trigger a chatbot interaction with Meta AI. This month, Apple is introducing its own AI suite, Apple Intelligence, through software updates that bring AI to features like photo and text editing.

Privacy concerns loom large. AI tools rely on user data to function effectively, and companies collect vast amounts of information from searches, photos, and even social media posts to train their algorithms. This data collection has raised alarm, especially when companies like Microsoft use LinkedIn content to train AI, prompting questions about where and how personal information is used.

For users looking to retain some control, there are a few options, though they are not uniform across platforms. Google offers a way to filter out AI-generated search summaries by selecting the “Web” tab, though there is no permanent way to disable the feature entirely. Users can also prevent Google from storing search data by managing settings on the “My Activity” page. Microsoft’s AI Copilot in the Edge browser can be turned off in settings, while LinkedIn users can opt out of having their posts used to train AI. Apple, which promotes itself as a privacy-conscious tech giant, requires users to opt in to Apple Intelligence, which the company says was designed with privacy in mind. Unlike its competitors, Apple says user data is not stored on its servers long-term.

With AI likely to keep growing in popularity, the delicate balance between functionality, privacy, and user choice has become a key issue. As companies navigate this terrain, consumers may find themselves not only adapting to the new technology but also demanding clearer boundaries in this new AI-driven world.
