
artificial intelligence | Breaking Cybersecurity News | The Hacker News

Category — artificial intelligence
The Value of AI-Powered Identity

Oct 08, 2024 Machine Learning / Data Security
Introduction

Artificial intelligence (AI) deepfakes and misinformation may cause worry in the world of technology and investment, but this powerful, foundational technology has the potential to benefit organizations of all kinds when harnessed appropriately. In the world of cybersecurity, one of the most important areas of application of AI is augmenting and enhancing identity management systems. AI-powered identity lifecycle management is at the vanguard of digital identity and is used to enhance security, streamline governance, and improve the UX of an identity system.

Benefits of an AI-powered identity

AI is a technology that crosses barriers between traditionally opposing business area drivers, bringing previously conflicting areas together:

- AI enables better operational efficiency by reducing risk and improving security
- AI enables businesses to achieve goals by securing cyber-resilience
- AI facilitates agile and secure access by ensuring regulatory compliance

AI and unifi…
Agentic AI in SOCs: A Solution to SOAR's Unfulfilled Promises

Sep 25, 2024 Artificial Intelligence / SOC Automation
Security Orchestration, Automation, and Response (SOAR) was introduced with the promise of revolutionizing Security Operations Centers (SOCs) through automation, reducing manual workloads and enhancing efficiency. However, despite three generations of technology and 10 years of advancements, SOAR hasn't fully delivered on its potential, leaving SOCs still grappling with many of the same challenges. Enter Agentic AI—a new approach that could finally fulfill the SOC's long-awaited vision, providing a more dynamic and adaptive solution to automate SOC operations effectively. Three Generations of SOAR – Still Falling Short SOAR emerged in the mid-2010s with companies like PhantomCyber, Demisto, and Swimlane, promising to automate SOC tasks, improve productivity, and shorten response times. Despite these ambitions, SOAR found its greatest success in automating generalized tasks like threat intel propagation, rather than core threat detection, investigation, and response (TDIR) workloads.
The Secret Weakness Execs Are Overlooking: Non-Human Identities

Oct 03, 2024 Enterprise Security / Cloud Security
For years, securing a company's systems was synonymous with securing its "perimeter." There was what was safe "inside" and the unsafe outside world. We built sturdy firewalls and deployed sophisticated detection systems, confident that keeping the barbarians outside the walls kept our data and systems safe. The problem is that we no longer operate within the confines of physical on-prem installations and controlled networks. Data and applications now reside in distributed cloud environments and data centers, accessed by users and devices connecting from anywhere on the planet. The walls have crumbled, and the perimeter has dissolved, opening the door to a new battlefield: identity. Identity is at the center of what the industry has praised as the new gold standard of enterprise security: "zero trust." In this paradigm, explicit trust becomes mandatory for any interactions between systems, and no implicit trust shall subsist. Every access request, regardless of its origin…
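The zero-trust principle the excerpt describes, explicit verification for every request, whether it comes from a human or a non-human identity, can be sketched as a per-request policy check. This is a minimal illustration of the idea, not any vendor's implementation; the identity names and policy shape are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    identity: str          # human user or non-human identity (service account, API key)
    resource: str
    authenticated: bool    # credential verified for this specific request
    device_trusted: bool   # device/workload posture check passed

def authorize(req: AccessRequest, grants: dict) -> bool:
    """Zero trust: no implicit trust; every request is verified explicitly."""
    if not (req.authenticated and req.device_trusted):
        return False                          # network location alone grants nothing
    return req.resource in grants.get(req.identity, set())

# Non-human identities get the same scoped, explicit grants as people
policy = {"ci-bot": {"artifact-store"}}
print(authorize(AccessRequest("ci-bot", "artifact-store", True, True), policy))  # True
print(authorize(AccessRequest("ci-bot", "prod-db", True, True), policy))         # False
```

The point of the sketch is that the decision depends only on verified identity, posture, and an explicit grant, never on where the request came from.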
LinkedIn Halts AI Data Processing in U.K. Amid Privacy Concerns Raised by ICO

Sep 21, 2024 Privacy / Artificial Intelligence
The U.K. Information Commissioner's Office (ICO) has confirmed that professional social networking platform LinkedIn has suspended processing users' data in the country to train its artificial intelligence (AI) models. "We are pleased that LinkedIn has reflected on the concerns we raised about its approach to training generative AI models with information relating to its U.K. users," Stephen Almond, executive director of regulatory risk, said. "We welcome LinkedIn's confirmation that it has suspended such model training pending further engagement with the ICO." Almond also said the ICO intends to keep a close eye on companies that offer generative AI capabilities, including Microsoft and LinkedIn, to ensure that they have adequate safeguards in place and take steps to protect the information rights of U.K. users. The development comes after the Microsoft-owned company admitted to training its own AI on users' data without seeking their exp…
How to Investigate ChatGPT activity in Google Workspace

Sep 17, 2024 GenAI Security / SaaS Security
When you connect your organization's Google Drive account to ChatGPT, you grant ChatGPT extensive permissions for not only your personal files, but resources across your entire shared drive. As you might imagine, this introduces an array of cybersecurity challenges. This post outlines how to see ChatGPT activity natively in the Google Workspace admin console, and how Nudge Security can provide full visibility into all genAI integrations. Since launching ChatGPT in 2022, OpenAI has defied expectations with a steady stream of product announcements and enhancements. One such announcement came on May 16, 2024, and for most consumers, it probably felt innocuous. Titled "Improvements to data analysis in ChatGPT," the post outlines how users can add files directly from Google Drive and Microsoft OneDrive. It's worth mentioning that other genAI tools like Google AI Studio and Claude Enterprise have also added similar capabilities recently. Pretty great, right? Maybe. When you connec…
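The kind of review described here can also be approximated programmatically. Workspace records OAuth grants as "token" events in the Admin SDK Reports API, and a script can filter those for ChatGPT. The sketch below works on sample records shaped like the API's `items` list; the sample data and field values are illustrative, and a real run would fetch `activities` via `reports.activities.list(userKey='all', applicationName='token')` with appropriate credentials:

```python
def chatgpt_token_events(activities: list) -> list:
    """Filter Reports API 'token' activity records for grants to ChatGPT."""
    hits = []
    for item in activities:
        for event in item.get("events", []):
            params = {p["name"]: p.get("value") for p in event.get("parameters", [])}
            if params.get("app_name") == "ChatGPT":
                hits.append({
                    "user": item.get("actor", {}).get("email"),
                    "event": event.get("name"),
                    "scopes": next((p.get("multiValue")
                                    for p in event.get("parameters", [])
                                    if p["name"] == "scope"), None),
                })
    return hits

# Illustrative record shaped like a Reports API response item
sample = [{
    "actor": {"email": "alice@example.com"},
    "events": [{"name": "authorize", "parameters": [
        {"name": "app_name", "value": "ChatGPT"},
        {"name": "scope", "multiValue": ["https://www.googleapis.com/auth/drive"]},
    ]}],
}]
print(chatgpt_token_events(sample))
```

The `scope` multi-value is the part worth auditing: it shows whether a grant reaches beyond a user's own files into shared drives.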
Meta to Train AI Models Using Public U.K. Facebook and Instagram Posts

Sep 17, 2024 Artificial Intelligence / Regulatory Compliance
Meta has announced that it will begin training its artificial intelligence (AI) systems using public content shared by adult users across Facebook and Instagram in the U.K. in the coming months. "This means that our generative AI models will reflect British culture, history, and idiom, and that U.K. companies and institutions will be able to utilize the latest technology," the social media behemoth said. As part of the process, users aged 18 and above are expected to receive in-app notifications starting this week on both Facebook and Instagram, explaining its modus operandi and how they can readily access an objection form to deny their data being used to train the company's generative AI models. The company said it will honor users' choices and that it won't contact users who have already objected to their data being used for this purpose. It also noted that it will not include private messages with friends and family, as well as information from accounts…
Ireland's Watchdog Launches Inquiry into Google's AI Data Practices in Europe

Sep 12, 2024 Regulatory Compliance / Data Protection
The Irish Data Protection Commission (DPC) has announced that it has commenced a "Cross-Border statutory inquiry" into Google's foundational artificial intelligence (AI) model to determine whether the tech giant has adhered to data protection regulations in the region when processing the personal data of European users. "The statutory inquiry concerns the question of whether Google has complied with any obligations that it may have had to undertake an assessment, pursuant to Article 35[2] of the General Data Protection Regulation (Data Protection Impact Assessment), prior to engaging in the processing of the personal data of E.U./E.E.A. data subjects associated with the development of its foundational AI model, Pathways Language Model 2 (PaLM 2)," the DPC said. PaLM 2 is Google's state-of-the-art language model with improved multilingual, reasoning, and coding capabilities. It was unveiled by the company in May 2023. With Google's European headqu…
Webinar: Learn to Boost Cybersecurity with AI-Powered Vulnerability Management

Sep 02, 2024 Vulnerability Management / Webinar
The world of cybersecurity is in a constant state of flux. New vulnerabilities emerge daily, and attackers are becoming more sophisticated. In this high-stakes game, security leaders need every advantage they can get. That's where Artificial Intelligence (AI) comes in. AI isn't just a buzzword; it's a game-changer for vulnerability management. AI is poised to revolutionize vulnerability management in the coming years. It enables security teams to:

- Identify risks at scale: AI can analyze massive amounts of data to identify vulnerabilities that humans might miss.
- Prioritize threats: AI helps focus on the most critical vulnerabilities, ensuring resources are used effectively.
- Remediate faster: AI automates many tasks, allowing for quicker and more efficient remediation.

AI isn't just about technology; it's about people. This webinar will delve into how security leaders can leverage AI to empower their teams and foster a culture of security. Learn how to tur…
Microsoft Fixes ASCII Smuggling Flaw That Enabled Data Theft from Microsoft 365 Copilot

Aug 27, 2024 AI Security / Vulnerability
Details have emerged about a now-patched vulnerability in Microsoft 365 Copilot that could enable the theft of sensitive user information using a technique called ASCII smuggling. "ASCII Smuggling is a novel technique that uses special Unicode characters that mirror ASCII but are actually not visible in the user interface," security researcher Johann Rehberger said. "This means that an attacker can have the [large language model] render, to the user, invisible data, and embed them within clickable hyperlinks. This technique basically stages the data for exfiltration!" The entire attack strings together a number of attack methods to fashion them into a reliable exploit chain. This includes the following steps:

- Trigger prompt injection via malicious content concealed in a document shared on the chat to seize control of the chatbot
- Using a prompt injection payload to instruct Copilot to search for more emails and documents, a technique called automatic too…
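The invisible-character trick Rehberger describes can be reproduced with the Unicode Tags block (U+E0000 onward), whose code points mirror ASCII one-for-one but render as nothing in most user interfaces. The following is a minimal sketch of the encoding idea, not Rehberger's exact tooling, and the payload string is invented for the demo:

```python
TAG_BASE = 0xE0000  # Unicode Tags block: U+E0000 + ASCII code point

def smuggle(visible: str, secret: str) -> str:
    """Append `secret` (ASCII) as invisible tag characters after `visible` text."""
    hidden = "".join(chr(TAG_BASE + ord(c)) for c in secret)
    return visible + hidden

def reveal(text: str) -> str:
    """Recover any tag-smuggled payload hidden inside a string."""
    return "".join(chr(ord(c) - TAG_BASE) for c in text
                   if TAG_BASE <= ord(c) <= TAG_BASE + 0x7F)

msg = smuggle("Click here for the report", "exfil:alice@example.com")
# msg displays as plain text, yet the payload survives copy/paste and hyperlinks
print(repr(reveal(msg)))
```

Embedding such a string in a clickable hyperlink is what turns an invisible payload into an exfiltration channel: the user sees only the visible text, while the hidden data rides along in the URL.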
Researchers Identify Over 20 Supply Chain Vulnerabilities in MLOps Platforms

Aug 26, 2024 ML Security / Artificial Intelligence
Cybersecurity researchers are warning about the security risks in the machine learning (ML) software supply chain following the discovery of more than 20 vulnerabilities that could be exploited to target MLOps platforms. These vulnerabilities, which are described as inherent- and implementation-based flaws, could have severe consequences, ranging from arbitrary code execution to loading malicious datasets. MLOps platforms offer the ability to design and execute an ML model pipeline, with a model registry acting as a repository used to store and version trained ML models. These models can then be embedded within an application or allow other clients to query them using an API (aka model-as-a-service). "Inherent vulnerabilities are vulnerabilities that are caused by the underlying formats and processes used in the target technology," JFrog researchers said in a detailed report. Some examples of inherent vulnerabilities include abusing ML models to run code of the attacker…
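A classic instance of such an inherent flaw is a model serialization format that executes code on load; Python's pickle, used by several model formats, is the usual example. Below is a minimal, harmless illustration of the mechanism: the callable returned from `__reduce__` stands in for real attacker code such as `os.system`:

```python
import pickle

class MaliciousModel:
    """Unpickling this object runs an attacker-chosen callable, not a constructor."""
    def __reduce__(self):
        # A real payload would return something like (os.system, ("curl ... | sh",));
        # eval of a harmless expression keeps this demo safe.
        return (eval, ("6 * 7",))

payload = pickle.dumps(MaliciousModel())   # what a poisoned "model file" contains
result = pickle.loads(payload)             # attacker code runs at load time
print(result)                              # 42 -- not a MaliciousModel at all
```

This is why simply *loading* an untrusted model from a registry can be equivalent to executing untrusted code, and why weights-only formats are generally preferred for untrusted inputs.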
New UULoader Malware Distributes Gh0st RAT and Mimikatz in East Asia

Aug 19, 2024 Threat Intelligence / Cryptocurrency
A new type of malware called UULoader is being used by threat actors to deliver next-stage payloads like Gh0st RAT and Mimikatz . The Cyberint Research Team, which discovered the malware, said it's distributed in the form of malicious installers for legitimate applications targeting Korean and Chinese speakers. There is evidence pointing to UULoader being the work of a Chinese speaker due to the presence of Chinese strings in program database (PDB) files embedded within the DLL file. "UULoader's 'core' files are contained in a Microsoft Cabinet archive (.cab) file which contains two primary executables (an .exe and a .dll) which have had their file header stripped," the company said in a technical report shared with The Hacker News. One of the executables is a legitimate binary that's susceptible to DLL side-loading, which is used to sideload the DLL file that ultimately loads the final stage, an obfuscated file named "XamlHost.sys" that…
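The report quoted above does not detail how UULoader restores the stripped headers, but the general evasion trick is well known: ship a PE file with its magic bytes removed so naive file-type detection misses it, then restore them at install time before the binary is used. A simplified illustration of that idea, assuming a two-byte strip of the "MZ" DOS magic:

```python
MZ_MAGIC = b"MZ"  # DOS header magic every Windows PE file starts with

def strip_header(pe: bytes) -> bytes:
    """Remove the magic so the blob no longer looks like an executable on disk."""
    assert pe.startswith(MZ_MAGIC)
    return pe[len(MZ_MAGIC):]

def restore_header(blob: bytes) -> bytes:
    """Prepend the magic again just before the file is dropped and loaded."""
    return MZ_MAGIC + blob

fake_pe = b"MZ\x90\x00...rest of PE..."       # stand-in bytes, not a real binary
shipped = strip_header(fake_pe)                # what lands in the .cab archive
assert not shipped.startswith(b"MZ")           # signature-based PE detection misses it
assert restore_header(shipped) == fake_pe      # intact again at execution time
```

Detection therefore has to key on the archive's contents and behavior rather than on file-magic alone.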
OpenAI Blocks Iranian Influence Operation Using ChatGPT for U.S. Election Propaganda

Aug 17, 2024 National Security / AI Ethics
OpenAI on Friday said it banned a set of accounts linked to what it said was an Iranian covert influence operation that leveraged ChatGPT to generate content that, among other things, focused on the upcoming U.S. presidential election. "This week we identified and took down a cluster of ChatGPT accounts that were generating content for a covert Iranian influence operation identified as Storm-2035," OpenAI said. "The operation used ChatGPT to generate content focused on a number of topics — including commentary on candidates on both sides in the U.S. presidential election — which it then shared via social media accounts and websites." The artificial intelligence (AI) company said the content did not achieve any meaningful engagement, with a majority of the social media posts receiving negligible to no likes, shares, and comments. It further noted it had found little evidence that the long-form articles created using ChatGPT were shared on social media platforms.
How to Set up an Automated SMS Analysis Service with AI in Tines

Jul 22, 2024 Threat Detection / Employee Security
The opportunities to use AI in workflow automation are many and varied, but one of the simplest ways to use AI to save time and enhance your organization's security posture is by building an automated SMS analysis service. Workflow automation platform Tines provides a good example of how to do it. The vendor recently released their first native AI features, and security teams have already started sharing the AI-enhanced workflows they've built using the platform. Tines' library includes AI-enhanced pre-built workflows for normalizing alerts, creating cases, and determining which phishing emails require escalation. Let's take a closer look at their SMS analysis workflow, which, like all of their pre-built workflows, is free to access and import, and can be used with a free Community Edition account. Here, we'll share an overview of the workflow, and a step-by-step guide for getting it up and running.

The problem - SMS scam messages targeted at employees
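Tines' workflow itself is proprietary, but the core step — handing an inbound SMS to an analyzer and acting on a structured verdict — can be sketched in a few lines. Here simple heuristics stand in for the AI call; the function name, keywords, and threshold are all illustrative, not Tines' implementation:

```python
import re

URGENCY_WORDS = {"urgent", "immediately", "suspended", "verify", "act now"}

def analyze_sms(text: str) -> dict:
    """Score an SMS for phishing signals; a real service would call an LLM here."""
    lowered = text.lower()
    signals = {
        "has_link": bool(re.search(r"https?://\S+", lowered)),
        "urgency": any(w in lowered for w in URGENCY_WORDS),
        "asks_credentials": any(w in lowered for w in ("password", "login", "ssn")),
    }
    score = sum(signals.values())
    return {"signals": signals,
            "verdict": "suspicious" if score >= 2 else "likely benign"}

print(analyze_sms("URGENT: verify your login at http://evil.example/now"))
```

In a workflow platform, the verdict dict would feed the next action: open a case, alert the employee, or close silently, which is exactly the triage loop the pre-built workflow automates.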