How to increase cybersecurity in the age of AI PCs
As AI PCs become more widespread in business environments, they create new attack surfaces for cybercriminals. This article highlights why business leaders must act now to reframe their cybersecurity strategy to address the unique risks introduced by AI hardware and workloads. Read the article to get the full perspective, and connect with Bubble Cloud / Bubble Social Media Marketing for a security assessment built for the AI PC era.
Frequently Asked Questions
Why do AI PCs require different cybersecurity measures than traditional PCs?
AI PCs introduce a different risk profile because they run AI workloads locally on the device using a neural processing unit (NPU), instead of sending everything to the cloud or a central server.
Key differences and risks:
1. **More sensitive data stored on the device**
- AI PCs are designed to process data where it’s generated. That means more proprietary and sensitive information (customer data, financial data, internal documents) lives directly on laptops and desktops.
- If a device is lost, stolen, or compromised, the potential exposure is higher than with a thin client that relies mostly on the cloud.
2. **Model inversion attacks**
- Attackers can try to infer the training data behind an AI model by analyzing its outputs.
- Example: A wealth management firm trains an AI model on client portfolios. If a hacker performs a model inversion attack, they might reconstruct who the clients are and where their assets are held.
3. **Data poisoning risks**
- Cybercriminals can inject false or malicious data into training sets or into AI-powered tools like chatbots.
- This can cause “hallucinations” (incorrect or misleading outputs) or intentionally biased responses that misguide employees or customers.
4. **Expanded software attack surface**
- Employees can download AI apps and tools directly to their AI PCs. Not all of these apps are built with strong security practices.
- While major providers like OpenAI and Google tend to be more trustworthy, many smaller or unknown apps may introduce malware or data leakage risks.
5. **Faster adoption, evolving controls**
- AI PCs are scaling quickly: worldwide shipments are expected to reach **114 million units in 2025**, roughly **43% of all PC shipments** that year.
- By 2026, they're anticipated to be the only type of PC sold to large enterprises. Security teams are having to adapt controls and policies quickly to keep up.
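To make the model inversion risk above concrete, here is a deliberately simplified sketch in Python. The "model" is a toy one-nearest-neighbor predictor invented for illustration; real attacks target far more complex models, but the core idea is the same: confidence scores can leak information about the training data.

```python
# Toy illustration of model inversion: the attacker never sees the
# training data, only the model's confidence scores, yet recovers the
# data exactly. (Hypothetical example; real models and attacks are far
# more complex.)

def make_model(training_points):
    """A memorizing 1-nearest-neighbor 'model' over numeric records."""
    def predict(x):
        nearest = min(training_points, key=lambda p: abs(p - x))
        # Confidence rises as the query approaches a training record.
        return 1 / (1 + abs(nearest - x))
    return predict

secret_training_data = [12, 47, 83]   # e.g. sensitive client attributes
model = make_model(secret_training_data)

# The attacker probes the model and keeps inputs with perfect confidence,
# which occurs only at exact training records.
recovered = [x for x in range(101) if model(x) == 1.0]
print(recovered)   # [12, 47, 83]
```

Practical mitigations include limiting query rates, rounding or withholding confidence scores, and training with privacy-preserving techniques.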
In short, AI PCs don’t replace existing cybersecurity concerns; they reshape them. The same core principles still apply, but they need to be extended to cover local AI workloads, model integrity, and a broader ecosystem of AI applications on each device.
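The data poisoning risk can likewise be shown with a toy example. The scores and the threshold "classifier" below are invented for illustration; the point is that mislabeled training data shifts what the model learns, so a suspicious input slips past the poisoned model.

```python
# Toy demonstration of data poisoning (illustrative only; real attacks
# target far more complex models and pipelines).

def train_threshold(benign, malicious):
    """Train a 1-D threshold classifier: midpoint between class means."""
    mean_b = sum(benign) / len(benign)
    mean_m = sum(malicious) / len(malicious)
    return (mean_b + mean_m) / 2

def classify(score, threshold):
    return "malicious" if score > threshold else "benign"

# Clean training data: benign samples score low, malicious score high.
benign = [0.1, 0.2, 0.15, 0.25]
malicious = [0.8, 0.9, 0.85, 0.95]
t_clean = train_threshold(benign, malicious)        # ~0.525

# Attacker injects mislabeled "benign" samples with high scores,
# dragging the learned threshold upward.
poisoned_benign = benign + [0.9, 0.95, 0.92, 0.93, 0.94, 0.91]
t_poisoned = train_threshold(poisoned_benign, malicious)   # ~0.75

sample = 0.7   # a genuinely suspicious input
print(classify(sample, t_clean))     # "malicious" -- caught by clean model
print(classify(sample, t_poisoned))  # "benign" -- slips through after poisoning
```

This is why validating training data provenance and monitoring model outputs for drift matter as much as traditional endpoint controls.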
How should we approach buying AI PCs to reduce cybersecurity risk?
Cybersecurity for AI PCs starts before the devices ever reach your employees. The procurement process is a key control point.
Practical steps to take when buying AI PCs:
1. **Buy only from trusted, reputable sources**
- Purchase directly from PC manufacturers, authorized wholesale distributors, or well-established partners.
- Avoid gray-market or unknown resellers, where there’s a higher risk of pre-installed malware or tampered components.
2. **Verify hardware integrity**
- Work with vendors that offer hardware verification or supply-chain security features.
- Example: Dell's Secured Component Verification process issues a certificate for each device so customers can confirm that the hardware in each AI PC matches what was shipped and hasn't been altered.
3. **Assess vendor security posture**
- Ask vendors how they secure their manufacturing, firmware, and software update processes.
- Request documentation on their security certifications, incident response processes, and how they handle vulnerabilities.
4. **Standardize on approved configurations**
- Define a set of approved AI PC models and configurations that meet your security requirements (e.g., secure boot, disk encryption support, hardware-based security modules).
- Limit one-off purchases that bypass IT review.
5. **Plan for lifecycle management**
- Ensure your IT team can centrally manage updates, patches, and security policies for AI PCs.
- Confirm that your chosen devices integrate with your existing endpoint management and security tools.
By treating AI PC procurement as a security decision—not just a hardware purchase—you reduce the chance of compromised devices entering your environment and create a more trustworthy foundation for running local AI workloads.
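The "standardize on approved configurations" step above can be sketched as a simple fleet audit. The setting names, required values, and inventory format here are hypothetical placeholders for whatever your endpoint-management tooling actually reports:

```python
# Minimal sketch of a configuration-baseline audit for AI PCs.
# Setting names and the device-record format are hypothetical; a real
# audit would pull records from your endpoint-management system.

APPROVED_BASELINE = {
    "secure_boot": True,
    "disk_encryption": True,
    "tpm_version": "2.0",
}

def check_device(device: dict) -> list:
    """Return a list of baseline violations for one device record."""
    violations = []
    for setting, required in APPROVED_BASELINE.items():
        actual = device.get(setting)
        if actual != required:
            violations.append(f"{setting}: expected {required!r}, got {actual!r}")
    return violations

fleet = [
    {"hostname": "aipc-001", "secure_boot": True,
     "disk_encryption": True, "tpm_version": "2.0"},
    {"hostname": "aipc-002", "secure_boot": False,
     "disk_encryption": True, "tpm_version": "2.0"},
]

for device in fleet:
    issues = check_device(device)
    print(device["hostname"], "OK" if not issues else "; ".join(issues))
```

Running a check like this at procurement time, and again on a schedule, helps catch one-off purchases and configuration drift before they become incidents.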
What internal practices help secure AI PCs once employees start using them?
Once AI PCs are in employees’ hands, security depends on a mix of user behavior, technical controls, and clear policies.
Key practices to consider:
1. **Employee training focused on AI risks**
- Explain how AI PCs differ from traditional devices: more local data, AI models running on-device, and new types of attacks (like model inversion and data poisoning).
- Train employees to:
- Be cautious about what data they feed into AI tools.
- Recognize suspicious behavior or unexpected AI outputs.
- Report potential incidents quickly.
2. **Emphasize speed of communication and response**
- Reaction time is critical. If there’s a suspected breach or a compromised AI tool, IT needs to reach employees fast.
- Use channels like Slack, Teams, or email with clear, concise alerts and instructions.
- Run drills or simulations so employees know how to respond in real situations.
3. **Control which AI applications can be used**
- Create an approved list of AI apps and services (e.g., vetted tools from major providers like OpenAI or Google).
- Block or restrict untrusted AI apps where possible, especially those that request broad access to files, cameras, or microphones.
- Educate employees to check who built an app and whether it’s endorsed by IT before installing it.
4. **Use virtual environments on mixed-use devices**
- For personal devices that employees also use for work, IT can set up virtual environments or secure workspaces.
- These environments isolate company-approved software and data from personal apps, reducing the risk that malware from an untrusted app can interact with corporate systems.
5. **Apply familiar security principles to AI PCs**
- Even though AI PCs are reshaping the endpoint landscape, many core security practices still apply:
- Strong identity and access management.
- Endpoint protection and monitoring.
- Disk encryption and secure boot.
- Regular patching and updates.
- The difference is that these controls now need to account for local AI workloads and data.
6. **Balance access with protection**
- AI PCs can make employees more productive by giving them faster access to data and AI tools.
- At the same time, broad access increases the impact if an account or device is compromised.
- Work with business leaders to define which roles need access to which data and AI capabilities, and enforce that through role-based access controls.
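The "approved list" idea in step 3 can be sketched as a small triage rule. The app names and permission labels below are hypothetical placeholders for whatever your endpoint inventory reports:

```python
# Minimal sketch of AI-app governance on an endpoint. App names and
# permission labels are hypothetical; a real deployment would source both
# from your endpoint-management inventory.

APPROVED_AI_APPS = {"ChatGPT", "Gemini", "Copilot"}             # vetted by IT
HIGH_RISK_PERMISSIONS = {"filesystem", "camera", "microphone"}  # broad access

def triage_app(name: str, permissions: set) -> str:
    """Decide how to handle an AI app found on an AI PC."""
    if name in APPROVED_AI_APPS:
        return "allow"    # on the approved list
    if permissions & HIGH_RISK_PERMISSIONS:
        return "block"    # unknown app requesting broad access
    return "review"       # unknown but low-privilege: escalate to IT

print(triage_app("ChatGPT", {"network"}))                    # allow
print(triage_app("FreeAIWriter", {"filesystem", "camera"}))  # block
print(triage_app("NoteSummarizer", {"network"}))             # review
```

The same allow/block/review pattern extends naturally to role-based access in step 6: the lookup key becomes the employee's role rather than the app's name.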
By combining user education, rapid communication, application governance, and proven security fundamentals, you can reimagine endpoint security for AI PCs without slowing down your teams.


