
How To Reduce Vulnerable Code Risks In AI-Generated Software

Forbes Technology Council

Bernd is the CTO and founder of Dynatrace, a unified observability and security company that helps simplify enterprise cloud complexity.

With the average person spending nearly seven hours a day online, organizations increasingly rely on digital services. They are therefore looking for ways to accelerate software-driven innovation to meet the needs of customers and employees.

To this end, many are exploring the potential for generative AI-enabled copilots to speed up digital services delivery. While these approaches have advantages, AI-generated code can behave in unexpected ways if developers don't thoroughly review it. Without effective controls, organizations adopting these practices could put the reliability and security of their software at risk.

The Risks Of AI-Generated Code

There are six key reasons why AI-generated code could increase code quality issues in the immediate future and make it more difficult to maintain the reliability and security of software over the long term.

1. It relies on probabilistic learning. AI-generated code is assembled probabilistically from many sources of previously learned code, which may contain outdated content or fail to execute together. This issue has yet to become a significant problem, as AI has until now had good sources of human-curated input from coding sites such as Stack Overflow. However, as developers shift to AI-generated code, their motivation to keep these sites updated will likely deteriorate. Hence, the quality of input that AI learns from will degrade.

2. It can't take on all responsibility. In current delivery practices, developers are responsible for maintaining code quality, security and functionality. While it helps, generative AI doesn't remove this responsibility from developers because it can't ask itself, "Does this look right?" The risk is that with mounting workloads and the need to innovate faster, developers—already prone to human error—may take shortcuts and blindly trust the code generated by AI, making it easier for flaws or vulnerabilities to creep in.

3. It amplifies copy/paste issues. It's already acknowledged as bad practice for developers to speed up delivery by copying and pasting snippets of code, as it deteriorates maintainability and increases the risk of errors or vulnerabilities being replicated or overlooked. AI code generators amplify that process dramatically, as they essentially automate the copy/paste process at higher speeds. Research from GitClear reveals these AI-driven duplication issues are already a problem in open-source projects.

4. Time constraints remain. Already stretched development teams don't have time to manually check every line of code that generative AI creates. Over time, code quality is likely to degrade, as copilots support and even learn from bad habits such as code duplication and replicating issues from other repositories or libraries.

5. AI can't be trusted to review output. Some developers will attempt to manage the quality of AI-generated code using additional generative AI to scan, refactor and fix issues. While this may provide some tactical value in the future, these tools are subject to the same probabilistic learning issues as the original AI-assisted code generators.

6. It increases churn. Researchers at Cornell University have also highlighted that iterative use of AI can reduce the quality of output over time. This suggests that software quality will degrade as AI continually replicates patterns learned from each iteration of the same code. In turn, this will increase code churn—as the more it gets rewritten, the more bugs are introduced. GitClear highlights this further, forecasting that the volume of new code that developers roll back or update within the first two weeks of deployment will double in 2024 compared to before generative AI entered widespread use.
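
For teams that want to track this trend in their own repositories, one rough way to quantify early churn is to measure how often newly changed files are reworked within a two-week window. The sketch below shells out to git on a local checkout; the two-week window, the file-level (rather than line-level) granularity and the repository path are illustrative assumptions, not GitClear's exact methodology.

```python
# A rough, hypothetical sketch of measuring early churn: the share of
# changed files that are touched again within two weeks of a commit landing.
import subprocess
from datetime import timedelta

def recent_commits(repo: str, days: int = 90) -> list[tuple[str, int]]:
    """(hash, unix timestamp) pairs from the last `days` days, oldest first."""
    out = subprocess.run(
        ["git", "-C", repo, "log", f"--since={days} days ago",
         "--reverse", "--format=%H %ct"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [(h, int(t)) for h, t in (line.split() for line in out.splitlines())]

def files_touched(repo: str, commit: str) -> set[str]:
    """Files changed by a commit (a file-level proxy for churned code)."""
    out = subprocess.run(
        ["git", "-C", repo, "show", "--name-only", "--format=", commit],
        capture_output=True, text=True, check=True,
    ).stdout
    return {f for f in out.splitlines() if f}

def early_churn_ratio(repo: str, window_days: int = 14) -> float:
    """Fraction of changed files that are reworked again within the window."""
    commits = recent_commits(repo)
    window = timedelta(days=window_days).total_seconds()
    reworked = total = 0
    for i, (sha, ts) in enumerate(commits):
        changed = files_touched(repo, sha)
        later = set()
        for later_sha, later_ts in commits[i + 1:]:
            if later_ts - ts > window:
                break
            later |= files_touched(repo, later_sha)
        total += len(changed)
        reworked += len(changed & later)
    return reworked / total if total else 0.0

print(f"early churn: {early_churn_ratio('.'):.1%}")
```

Tracked over time, a rising ratio is an early warning that generated code is being merged faster than it is being reviewed.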

Protecting From Within

To address these risks, organizations should immunize their applications against vulnerable code and actively protect them from the unpredictability of AI-generated code. In the same way a person needs a healthy immune system to fight off infection, digital services need to automatically protect themselves from poor-quality code that weakens their security or reliability.

The most effective way to develop this immunity is to automate vulnerability detection and risk assessment and then enforce policies that block attacks from within the application, much as a vaccine primes the human body. This prevents new workloads that carry self-inflicted vulnerabilities, the kind that weaken code quality or security, from executing at runtime, thereby reducing the risk.
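
As a concrete illustration, a runtime guard of this kind can live inside the application itself and refuse to execute code paths that an automated scan has flagged. The sketch below is a minimal Python stand-in; the VULNERABLE_COMPONENTS feed, the risk scores and the threshold are hypothetical placeholders for whatever detection pipeline an organization actually runs.

```python
# A minimal sketch of "immunity from within": block flagged code paths
# at runtime rather than waiting for a perimeter control to catch an exploit.
import functools

# Assumed output of an automated scan: component name -> risk score (0-10).
VULNERABLE_COMPONENTS = {"legacy_deserializer": 9.8, "debug_endpoint": 7.4}
RISK_THRESHOLD = 7.0  # policy: block anything at or above this score

class BlockedByPolicy(RuntimeError):
    pass

def enforce_policy(component: str):
    """Decorator that blocks a code path at runtime if policy says so."""
    def wrap(fn):
        @functools.wraps(fn)
        def guarded(*args, **kwargs):
            score = VULNERABLE_COMPONENTS.get(component, 0.0)
            if score >= RISK_THRESHOLD:
                raise BlockedByPolicy(
                    f"{component} blocked: risk {score} >= {RISK_THRESHOLD}")
            return fn(*args, **kwargs)
        return guarded
    return wrap

@enforce_policy("legacy_deserializer")
def deserialize(payload: bytes):
    ...  # the vulnerable code path never runs while it remains flagged
```

The design point is that the policy decision happens at the moment of execution, so a newly flagged vulnerability is neutralized without waiting for a redeploy.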

Boosting Digital Resilience

To further strengthen their defenses, organizations using AI-assisted development practices should enhance their security analytics practices by bringing in observability data to enable more effective threat detection, forensics and incident response.

With this additional data source, teams can detect new risks and attack vectors as they emerge and quickly understand the impact of any that make it through the organization's defenses. This is becoming critical with the emergence of new cybersecurity regulations such as the SEC's breach disclosure rules, which require organizations to report material cyber incidents within four business days.
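
To make that concrete, one simple enrichment step is to join a security alert against the spans an observability backend recorded for the affected service, so responders can follow the trace IDs to see which downstream systems a suspicious request touched. The record shapes below are illustrative assumptions, not any particular vendor's schema.

```python
# Hypothetical sketch: enrich a security alert with observability context
# so responders can scope impact quickly during incident response.
from dataclasses import dataclass

@dataclass
class Alert:
    service: str
    indicator: str      # e.g. a suspicious request signature
    timestamp: float    # unix time the alert fired

@dataclass
class Span:
    service: str
    trace_id: str
    start: float
    end: float
    attributes: dict

def correlate(alert: Alert, spans: list[Span], window: float = 5.0) -> list[Span]:
    """Find spans on the affected service that overlap the alert window."""
    return [
        s for s in spans
        if s.service == alert.service
        and s.start - window <= alert.timestamp <= s.end + window
    ]

# Walking each matching trace_id then yields the forensic trail needed
# to meet tight disclosure deadlines.
```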

Organizations can make their security analytics capability more effective by driving it with multiple types of AI in a single framework. They should start with causal AI (very different from generative AI), which offers real-time visibility into events and code behavior across software components in context. Organizations should then establish a baseline that helps them manage production and security processes, making it easier to enforce governance policies.
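
The baselining step can be sketched in a few lines. Real causal-AI engines reason over service topology and event causality in context; the purely statistical stand-in below only illustrates the learn-normal-then-compare pattern, with the error-rate samples and the three-sigma tolerance as assumed values.

```python
# A minimal sketch of baselining: learn "normal" behavior for a metric
# from known-good history, then flag deviations for policy enforcement.
import statistics

def build_baseline(history: list[float]) -> tuple[float, float]:
    """Mean and standard deviation of recent, known-good observations."""
    return statistics.mean(history), statistics.stdev(history)

def violates_baseline(value: float, baseline: tuple[float, float],
                      tolerance: float = 3.0) -> bool:
    """Flag values more than `tolerance` standard deviations from normal."""
    mean, stdev = baseline
    return abs(value - mean) > tolerance * stdev

error_rates = [0.010, 0.012, 0.009, 0.011, 0.010, 0.013]  # per-minute samples
baseline = build_baseline(error_rates)
print(violates_baseline(0.094, baseline))  # True: investigate or enforce policy
```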

Adding predictive AI to this framework can help forecast future changes in software behavior based on patterns in historical data so that developers can preempt problems. Combining different types of AI in this way provides the insight IT leaders need to determine whether they should trigger further preventive actions or adjust the baselines used to monitor "normal" systems behavior to better protect the organization.
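
A similarly minimal sketch shows the predictive layer: extrapolating a metric's trend from historical samples and warning before it crosses a baseline threshold. The least-squares fit, the latency samples and the 400 ms ceiling below are assumptions standing in for a production forecasting model.

```python
# Hypothetical sketch of predictive AI in this framework: project a metric's
# trend forward and warn before it breaches the monitored baseline.
def linear_forecast(samples: list[float], steps_ahead: int) -> float:
    """Least-squares line through (0..n-1, samples), evaluated ahead."""
    n = len(samples)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(samples) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, samples)) / \
            sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    return intercept + slope * (n - 1 + steps_ahead)

latency_p95 = [210, 215, 224, 231, 242, 250]  # ms, recent samples
projected = linear_forecast(latency_p95, steps_ahead=24)
if projected > 400:  # hypothetical SLO ceiling
    print(f"Projected p95 of {projected:.0f} ms breaches the 400 ms baseline")
```

Acting on the projection, rather than the breach itself, is what turns monitoring into the preemptive control the framework calls for.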

Ensuring Digital Immunity During AI-Fueled Transformation

As they prepare for a future of AI-driven digital transformation, development teams need to ensure their services are resilient and secure by default. They need effective controls to prevent errors and vulnerabilities from slipping into live applications and protect the organization from damaging service outages.

Those who invest in building out their own digital immune system and harnessing advanced security analytics capabilities to maintain secure, adaptable and resilient services will be best placed to enjoy a prosperous AI-charged future.


Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.