
machine learning | Breaking Cybersecurity News | The Hacker News

The Value of AI-Powered Identity

Oct 08, 2024 Machine Learning / Data Security
Introduction

Artificial intelligence (AI) deepfakes and misinformation may cause worry in the world of technology and investment, but this powerful, foundational technology has the potential to benefit organizations of all kinds when harnessed appropriately. In the world of cybersecurity, one of the most important areas of application of AI is augmenting and enhancing identity management systems. AI-powered identity lifecycle management is at the vanguard of digital identity and is used to enhance security, streamline governance, and improve the user experience of an identity system.

Benefits of an AI-powered identity

AI is a technology that crosses barriers between traditionally opposing business drivers, bringing previously conflicting areas together:

- AI enables better operational efficiency by reducing risk and improving security
- AI enables businesses to achieve goals by securing cyber-resilience
- AI facilitates agile and secure access by ensuring regulatory compliance

…
EPSS vs. CVSS: What’s the Best Approach to Vulnerability Prioritization?

Sep 26, 2024 Vulnerability Management / Security Automation
Many businesses rely on the Common Vulnerability Scoring System (CVSS) to assess the severity of vulnerabilities for prioritization. While these scores provide some insight into the potential impact of a vulnerability, they don't factor in real-world threat data, such as the likelihood of exploitation. With new vulnerabilities discovered daily, teams don't have the time, or the budget, to waste on fixing vulnerabilities that won't actually reduce risk. Read on to learn more about how CVSS and the Exploit Prediction Scoring System (EPSS) compare and why using EPSS is a game changer for your vulnerability prioritization process.

What is vulnerability prioritization?

Vulnerability prioritization is the process of evaluating and ranking vulnerabilities based on the potential impact they could have on an organization. The goal is to help security teams determine which vulnerabilities should be addressed, in what timeframe, or whether they need to be fixed at all. This process ensures that the most critical risks are mitigated first.
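To make the comparison concrete, here is a minimal sketch of the kind of EPSS-first ranking the teaser describes. The CVE names, scores, and field names are hypothetical, not drawn from the article:

```python
# Illustrative sketch (not from the article): rank vulnerabilities by
# EPSS exploit probability first, breaking ties by CVSS severity.
# The scores, CVE names, and field names below are hypothetical.

def prioritize(vulns):
    """Sort vulnerabilities: highest exploit likelihood first, ties by severity."""
    return sorted(vulns, key=lambda v: (-v["epss"], -v["cvss"]))

vulns = [
    {"cve": "CVE-A", "cvss": 9.8, "epss": 0.02},  # severe, but rarely exploited
    {"cve": "CVE-B", "cvss": 7.5, "epss": 0.89},  # actively exploited in the wild
    {"cve": "CVE-C", "cvss": 5.3, "epss": 0.40},
]

ranked = prioritize(vulns)
# CVE-B moves to the top of the queue despite its lower CVSS score.
```

Under a CVSS-only policy the team would patch CVE-A first; weighting exploit likelihood reorders the queue toward the vulnerabilities most likely to actually be attacked.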
The Secret Weakness Execs Are Overlooking: Non-Human Identities

Oct 03, 2024 Enterprise Security / Cloud Security
For years, securing a company's systems was synonymous with securing its "perimeter." There was what was safe "inside" and the unsafe outside world. We built sturdy firewalls and deployed sophisticated detection systems, confident that keeping the barbarians outside the walls kept our data and systems safe. The problem is that we no longer operate within the confines of physical on-prem installations and controlled networks. Data and applications now reside in distributed cloud environments and data centers, accessed by users and devices connecting from anywhere on the planet. The walls have crumbled, and the perimeter has dissolved, opening the door to a new battlefield: identity.

Identity is at the center of what the industry has praised as the new gold standard of enterprise security: "zero trust." In this paradigm, explicit trust becomes mandatory for any interaction between systems, and no implicit trust shall subsist. Every access request, regardless of its origin, must be explicitly verified.
Researchers Identify Over 20 Supply Chain Vulnerabilities in MLOps Platforms

Aug 26, 2024 ML Security / Artificial Intelligence
Cybersecurity researchers are warning about the security risks in the machine learning (ML) software supply chain following the discovery of more than 20 vulnerabilities that could be exploited to target MLOps platforms. These vulnerabilities, described as inherent- and implementation-based flaws, could have severe consequences, ranging from arbitrary code execution to loading malicious datasets.

MLOps platforms offer the ability to design and execute an ML model pipeline, with a model registry acting as a repository used to store and version trained ML models. These models can then be embedded within an application or served to other clients that query them through an API (aka model-as-a-service). "Inherent vulnerabilities are vulnerabilities that are caused by the underlying formats and processes used in the target technology," JFrog researchers said in a detailed report. Some examples of inherent vulnerabilities include abusing ML models to run code of the attacker's choice.
The AI Hangover is Here – The End of the Beginning

Aug 12, 2024 AI Technology / Machine Learning
After a good year of sustained exuberance, the hangover is finally here. It's a gentle one (for now), as the market corrects the share price of the major players (like Nvidia, Microsoft, and Google), while other players reassess the market and adjust priorities. Gartner calls it the trough of disillusionment, when interest wanes and implementations fail to deliver the promised breakthroughs. Producers of the technology shake out or fail. Investment continues only if the surviving providers improve their products to the satisfaction of early adopters.

Let's be clear: this was always going to be the case. The post-human revolution promised by the AI cheerleaders was never a realistic goal, and the incredible excitement triggered by the early LLMs was not based on market success.

AI is here to stay

What's next for AI then? Well, if it follows the Gartner hype cycle, the deep crash is followed by the slope of enlightenment, where the maturing technology regains its footing and its benefits become clearer.
Safeguard Personal and Corporate Identities with Identity Intelligence

Jul 19, 2024 Machine Learning / Corporate Security
Learn about critical threats that can impact your organization and the bad actors behind them from Cybersixgill's threat experts. Each story shines a light on underground activities, the threat actors involved, and why you should care, along with what you can do to mitigate risk.

In the current cyber threat landscape, the protection of personal and corporate identities has become vital. Once in the hands of cybercriminals, compromised credentials and accounts provide unauthorized access to a corporation's sensitive information and an entry point for launching costly ransomware and other malware attacks. To properly mitigate threats stemming from compromised credentials and accounts, organizations need identity intelligence. Understanding the significance of identity intelligence and the benefits it delivers is foundational to maintaining a secure posture and minimizing risk.

There is a perception that security teams and threat analysts are already overloaded by too much data.
Summary of "AI Leaders Spill Their Secrets" Webinar

Jul 19, 2024 Technology / Artificial Intelligence
Event Overview

The "AI Leaders Spill Their Secrets" webinar, hosted by Sigma Computing, featured prominent AI experts sharing their experiences and strategies for success in the AI industry. The panel included Michael Ward from Sardine, Damon Bryan from Hyperfinity, and Stephen Hillion from Astronomer, moderated by Zalak Trivedi, Sigma Computing's Product Manager.

Key Speakers and Their Backgrounds

1. Michael Ward: Senior Risk Data Analyst at Sardine. Over 25 years of experience in software engineering. Focuses on data science, analytics, and machine learning to prevent fraud and money laundering.
2. Damon Bryan: Co-founder and CTO at Hyperfinity. Specializes in decision intelligence software for retailers and brands. Background in data science, AI, and analytics, transitioning from consultancy to a full-fledged software company.
3. Stephen Hillion: SVP of Data and AI at Astronomer. Manages data science teams and focuses on the development and scaling of…
The Emerging Role of AI in Open-Source Intelligence

Jul 03, 2024 OSINT / Artificial Intelligence
Recently, the Office of the Director of National Intelligence (ODNI) unveiled a new strategy for open-source intelligence (OSINT) and referred to OSINT as the "INT of first resort." Public and private sector organizations are realizing the value the discipline can provide, but are also finding that the exponential growth of digital data in recent years has overwhelmed many traditional OSINT methods. Thankfully, artificial intelligence (AI) and machine learning (ML) are starting to have a transformative impact on the future of information gathering and analysis.

What is Open-Source Intelligence (OSINT)?

Open-Source Intelligence refers to the collection and analysis of information from publicly available sources. These sources can include traditional media, social media platforms, academic publications, government reports, and any other data that is openly accessible. The key characteristic of OSINT is that it does not involve covert or clandestine methods of information gathering.
Google Introduces Project Naptime for AI-Powered Vulnerability Research

Jun 24, 2024 Vulnerability / Artificial Intelligence
Google has developed a new framework called Project Naptime that it says enables a large language model (LLM) to carry out vulnerability research with the aim of improving automated discovery approaches. "The Naptime architecture is centered around the interaction between an AI agent and a target codebase," Google Project Zero researchers Sergei Glazunov and Mark Brand said. "The agent is provided with a set of specialized tools designed to mimic the workflow of a human security researcher." The initiative is so named because it allows humans to "take regular naps" while it assists with vulnerability research and automates variant analysis.

The approach, at its core, seeks to take advantage of advances in code comprehension and the general reasoning ability of LLMs, allowing them to replicate human behavior when it comes to identifying and demonstrating security vulnerabilities. It encompasses several components, such as a Code Browser tool…
Critical RCE Vulnerability Discovered in Ollama AI Infrastructure Tool

Jun 24, 2024 Artificial Intelligence / Cloud Security
Cybersecurity researchers have detailed a now-patched security flaw affecting the Ollama open-source artificial intelligence (AI) infrastructure platform that could be exploited to achieve remote code execution. Tracked as CVE-2024-37032, the vulnerability has been codenamed Probllama by cloud security firm Wiz. Following responsible disclosure on May 5, 2024, the issue was addressed in version 0.1.34, released on May 7, 2024.

Ollama is a service for packaging, deploying, and running large language models (LLMs) locally on Windows, Linux, and macOS devices. At its core, the issue relates to a case of insufficient input validation that results in a path traversal flaw, which an attacker could exploit to overwrite arbitrary files on the server and ultimately achieve remote code execution. The shortcoming requires the threat actor to send specially crafted HTTP requests to the Ollama API server for successful exploitation. It specifically takes advantage of the API endpoint "/api/pull"…
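As a minimal sketch of the class of bug involved (this is not Ollama's actual code; the base directory and file names are illustrative), a server that writes files named by user input must canonicalize the path and confirm it stays inside the intended directory:

```python
# Sketch of the kind of validation whose absence enables a path
# traversal flaw like Probllama. Illustrative only; not Ollama's code.
import os

def safe_join(base_dir, user_path):
    """Resolve a user-supplied path under base_dir, rejecting traversal."""
    base = os.path.abspath(base_dir)
    full = os.path.abspath(os.path.join(base, user_path))
    if not full.startswith(base + os.sep):
        raise ValueError("path traversal attempt rejected")
    return full

# A well-formed name resolves inside the base directory...
safe_join("/srv/models", "blobs/sha256-abcd")
# ...while a crafted name like "../../etc/passwd" would raise ValueError,
# because it canonicalizes to a location outside /srv/models.
```

Skipping the canonicalize-and-check step is precisely what lets "specially crafted" request fields escape the storage directory and overwrite arbitrary files.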
New Attack Technique 'Sleepy Pickle' Targets Machine Learning Models

Jun 13, 2024 Vulnerability / Software Security
The security risks posed by the Pickle format have once again come to the fore with the discovery of a new "hybrid machine learning (ML) model exploitation technique" dubbed Sleepy Pickle. The attack method, per Trail of Bits, weaponizes the ubiquitous format used to package and distribute machine learning (ML) models to corrupt the model itself, posing a severe supply chain risk to an organization's downstream customers. "Sleepy Pickle is a stealthy and novel attack technique that targets the ML model itself rather than the underlying system," security researcher Boyan Milanov said.

While pickle is a serialization format widely used by ML libraries like PyTorch, it can be abused to carry out arbitrary code execution attacks simply by loading a pickle file (i.e., during deserialization). "We suggest loading models from users and organizations you trust, relying on signed commits, and/or loading models from [TensorFlow] or Jax formats with the from_…
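To see why merely loading a pickle file runs code, consider this minimal demonstration (not the Sleepy Pickle payload itself): pickle invokes an object's `__reduce__` during deserialization, so a crafted file can execute any callable the attacker names. The `Payload` class here is hypothetical and deliberately harmless:

```python
# Minimal demonstration of why deserializing untrusted pickle data is
# dangerous: pickle calls an object's __reduce__ on load, so a crafted
# payload can run arbitrary callables. Illustrative, harmless example.
import pickle

class Payload:
    def __reduce__(self):
        # A real exploit would return something like (os.system, ("...",));
        # a harmless builtin is used here to show code runs at load time.
        return (sorted, ([3, 1, 2],))

blob = pickle.dumps(Payload())
obj = pickle.loads(blob)  # calls sorted([3, 1, 2]) during deserialization
# obj is [1, 2, 3], not a Payload instance: the serialized callable ran.
```

Sleepy Pickle builds on this property to tamper with the model carried in the same file, which is why the researchers recommend trusted sources and non-pickle formats.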
AI Company Hugging Face Detects Unauthorized Access to Its Spaces Platform

Jun 01, 2024 AI-as-a-Service / Data Breach
Artificial intelligence (AI) company Hugging Face on Friday disclosed that it detected unauthorized access to its Spaces platform earlier this week. "We have suspicions that a subset of Spaces' secrets could have been accessed without authorization," it said in an advisory.

Spaces offers a way for users to create, host, and share AI and machine learning (ML) applications. It also functions as a discovery service to look up AI apps made by other users on the platform. In response to the security event, Hugging Face said it is taking the step of revoking a number of HF tokens present in those secrets and that it's notifying users who had their tokens revoked via email. "We recommend you refresh any key or token and consider switching your HF tokens to fine-grained access tokens, which are the new default," it added. Hugging Face, however, did not disclose how many users are impacted by the incident, which is currently under further investigation.
How to Build Your Autonomous SOC Strategy

May 30, 2024 Endpoint Security / Threat Detection
Security leaders are in a tricky position trying to discern how much new AI-driven cybersecurity tools could actually benefit a security operations center (SOC). The hype about generative AI is still everywhere, but security teams have to live in reality. They face constantly incoming alerts from endpoint security platforms, SIEM tools, and phishing emails reported by internal users. Security teams also face an acute talent shortage.

In this guide, we'll lay out practical steps organizations can take to automate more of their processes and build an autonomous SOC strategy. This should help address the acute talent shortage in security teams: by employing artificial intelligence and machine learning with a variety of techniques, these systems simulate the decision-making and investigative processes of human analysts. First, we'll define objectives for an autonomous SOC strategy and then consider key processes that could be automated. Next, we'll consider different AI and automation…
Experts Find Flaw in Replicate AI Service Exposing Customers' Models and Data

May 25, 2024 Machine Learning / Data Breach
Cybersecurity researchers have discovered a critical security flaw in artificial intelligence (AI)-as-a-service provider Replicate that could have allowed threat actors to gain access to proprietary AI models and sensitive information. "Exploitation of this vulnerability would have allowed unauthorized access to the AI prompts and results of all Replicate's platform customers," cloud security firm Wiz said in a report published this week.

The issue stems from the fact that AI models are typically packaged in formats that allow arbitrary code execution, which an attacker could weaponize to perform cross-tenant attacks by means of a malicious model. Replicate makes use of an open-source tool called Cog to containerize and package machine learning models that can then be deployed either in a self-hosted environment or to Replicate. Wiz said that it created a rogue Cog container and uploaded it to Replicate, ultimately employing it to achieve remote code execution…