Day One Exploits: How to Effectively Reduce the Threat
Cyber hygiene and patching are key measures towards protecting data and systems. However, it’s not always possible or practical to patch when vulnerabilities and associated patches are announced. This problem gives rise to day one exploits.
Day one exploits are responsible for attacks such as the recent Microsoft Exchange attack that compromised hundreds of thousands of organizations. This began as a zero-day exploit and was followed by numerous day one exploits once the vulnerabilities were announced. Day one exploits were also used by Iranian threat actors about a year ago to gain access to financial sector networks via published virtual private network (VPN) vulnerabilities.
Patching Systems – Not Always an Easy Fix
The crux of the problem lies in the hurdles organizations face when patching systems. This is what leads to intrusions on such a large scale. Additionally, once an organization is infiltrated, recovery can require system rebuilds, as was the case with recent attacks.
How can we build our infrastructure in such a way that patching is simplified and complete recovery is enabled?
Twenty years ago, at a previous employer, our email was Unix-based. The system administrators had the IMAP-based mail server set up with a mirror. The team could:
- Break the mirror
- Patch or even rebuild a system
- Restore screened content from backup
- Prepare the system to go back online
- Initiate a momentary outage to bring the clean system up
- Wipe the system that was still vulnerable
- Prepare that system to go back online
- Rejoin the mirror
This process allowed the mail servers to be rebuilt or patched with minimal downtime, and it could be followed for any other server configured this way.
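To make the workflow concrete, here is a minimal orchestration sketch of that break/patch/rejoin cycle. The command names (manage-mirror, patch-or-rebuild, restore-screened-backup, and so on) and host names are hypothetical placeholders, not any specific tool; substitute whatever volume-management and configuration-management commands your environment actually uses.

```python
#!/usr/bin/env python3
"""Illustrative sketch of the mirror break/patch/rejoin cycle described above.

All command and host names below are hypothetical placeholders; replace them
with your environment's own tooling.
"""
import subprocess


def run(*cmd: str) -> None:
    """Run one step and stop the workflow if it fails."""
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)


def rebuild_node(node: str) -> None:
    run("manage-mirror", "break", node)       # take one half of the mirror offline
    run("patch-or-rebuild", node)             # patch, or rebuild from a known-good image
    run("restore-screened-backup", node)      # restore only screened, verified content
    run("prepare-for-service", node)          # apply configuration and hardening checks


def main() -> None:
    rebuild_node("mail-a")                    # rebuild the offline half first
    run("switch-traffic", "--to", "mail-a")   # momentary outage to bring the clean system up
    rebuild_node("mail-b")                    # wipe and rebuild the formerly live, still-vulnerable half
    run("manage-mirror", "rejoin", "mail-a", "mail-b")  # resync the mirror


if __name__ == "__main__":
    main()
```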
Using “Cloud Native” Environments for Rapid Updates
Resiliency such as this is increasingly important when infiltrations are difficult to detect. As established above, relying on patching alone to thwart vulnerabilities is already a flawed premise. This leads to the question, "Have my systems been infiltrated?" Patching alone does not close any backdoors an attacker may have created to establish persistence, and patching cycles can take time to catch up to the systems in question. An internal capability to manage and process system reimaging is a necessity.
For many services that can be run in virtual environments, following the cloud native architectural style, it is possible to move a workload to a new instance of a patched or rebuilt application or server. Application data is screened and verified prior to being restored to this new environment. DevOps practices such as decoupling of modules and workload mobility in cloud native environments make rapid updates possible without impact to the supported service.
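As a minimal sketch of this idea in a Kubernetes-based cloud native environment, the snippet below shifts a deployment onto a patched container image and lets the platform perform a rolling update so the supported service stays available. The deployment name, namespace, container name, and image tag are assumptions for illustration only.

```python
"""Minimal sketch: move a workload onto a patched image via a rolling update.

Assumes a Kubernetes deployment; the names and registry below are illustrative.
Requires the official `kubernetes` Python client.
"""
from kubernetes import client, config


def roll_to_patched_image(deployment: str, namespace: str, container: str, image: str) -> None:
    config.load_kube_config()  # or config.load_incluster_config() when running inside the cluster
    apps = client.AppsV1Api()
    patch = {
        "spec": {
            "template": {
                "spec": {
                    "containers": [{"name": container, "image": image}]
                }
            }
        }
    }
    # Kubernetes replaces pods gradually, so the service remains available during the update.
    apps.patch_namespaced_deployment(name=deployment, namespace=namespace, body=patch)


if __name__ == "__main__":
    roll_to_patched_image("mail-frontend", "prod", "mail", "registry.example.com/mail:2.3.1-patched")
```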
This is critical to the resiliency needed to recover from today’s attacks. The restored or rebuilt system should be configured to meet policy requirements and best practices for security configurations.
Resources and Planning
Best practices for security configurations are intended to ensure system hygiene and reduce the chance of attacks. The CIS Benchmarks and CIS Controls provide guidance as to how you should prioritize policy and control implementations to reduce risk to your organization. There are over 100 CIS Benchmarks across more than 25 vendor families available for applications, operating systems, services, and devices.
Applications like Microsoft Exchange are more complex to patch and restore seamlessly, but this level of recoverability can be achieved with planning and the use of a database availability group (DAG). Planning is required to architect a network flexible enough to ease the recovery process. However, this level of resourcing is not always possible. If virtual environments can be used, moving workloads is an excellent current-day option.
Zero Trust Architecture and DevOps
What if patching were less scary? We are all accustomed to testing patches before deploying updates to systems. That testing delays mitigation of the associated vulnerabilities, leaving the door open for day one exploits.
With consideration, the move toward pervasive zero trust architectures and DevOps processes could help reduce this problem. In DevOps, modules are kept small and reused rather than rewritten. There is a movement toward reducing the coupling of code to allow for faster updates beyond cloud native deployments. Applications have also steadily reduced their coupling to operating systems. This in turn minimizes any unforeseen impact of patches in different environments.
In zero trust architectures, applications or even components do not trust other applications or components, and they verify that those components are as expected before authorizing access. This decoupling and reduced reliance on adjoining modules, components, applications, and operating systems will enable faster patching. Ideally, vendors will embrace these concepts and make near-immediate patching possible without distributed testing at each site.
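The simplified sketch below illustrates only the "verify before authorizing" step between two components, using a shared-secret request signature and a freshness check. Real zero trust deployments typically rely on mutual TLS and short-lived, centrally issued credentials; the secret, component names, and message format here are assumptions for illustration, not a complete design.

```python
"""Simplified illustration of "verify before authorizing" between components.

The shared secret, caller name, and message format are illustrative only.
"""
import hashlib
import hmac
import time

SHARED_SECRET = b"rotate-me-regularly"   # placeholder; use managed, short-lived credentials in practice
MAX_AGE_SECONDS = 60


def sign_request(caller_id: str, payload: bytes, timestamp: int) -> str:
    """The calling component signs its identity, a timestamp, and the payload."""
    message = f"{caller_id}|{timestamp}|".encode() + payload
    return hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()


def authorize(caller_id: str, payload: bytes, timestamp: int, signature: str) -> bool:
    """The receiving component re-derives the signature; nothing is trusted by default."""
    if abs(time.time() - timestamp) > MAX_AGE_SECONDS:   # reject stale or replayed requests
        return False
    expected = sign_request(caller_id, payload, timestamp)
    return hmac.compare_digest(expected, signature)


if __name__ == "__main__":
    now = int(time.time())
    sig = sign_request("billing-service", b'{"action": "read"}', now)
    print(authorize("billing-service", b'{"action": "read"}', now, sig))  # True only if verification passes
```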
Establishing Resilience Against Day One Vulnerabilities
Resilient infrastructure coupled with lower risk patching from vendors will help close the attack window for day one vulnerabilities. If operating systems and application providers increasingly embrace DevOps principles, organizations will be able to patch systems more effectively. Until then, determine if and how your architecture can be more resilient. You’ll want to enable recovery of completely rebuilt systems that meet hygiene requirements.
Cloud-hosted environments often include this level of resiliency, especially if they are based on cloud native models. Recent attacks have demonstrated the need for this level of resiliency, and determining the best risk mitigation strategy should be a priority for every organization.
For additional information on the concepts described, see Transforming Information Security: Optimizing Five Concurrent Trends to Reduce Resource Drain sections 7.2 and 7.3.
About the Author
Kathleen Moriarty
Chief Technology Officer
Kathleen Moriarty, Chief Technology Officer at the Center for Internet Security, has over two decades of experience. Formerly the Security Innovations Principal in the Dell Technologies Office of the CTO, Kathleen worked on ecosystems, standards, and strategy. During her tenure in the Dell EMC Office of the CTO, Kathleen had the honor of being appointed to and serving two terms as the Internet Engineering Task Force (IETF) Security Area Director and as a member of the Internet Engineering Steering Group from March 2014 to 2018. She was named to CyberSecurity Ventures' Top 100 Women Fighting Cybercrime and is a 2020 Tropaia Award Winner, Outstanding Faculty, Georgetown SCS.
Kathleen has over twenty years of experience driving positive outcomes across Information Technology Leadership, IT Strategy and Vision, Information Security, Risk Management, Incident Handling, Project Management, Large Teams, Process Improvement, and Operations Management in multiple roles with MIT Lincoln Laboratory, Hudson Williams, FactSet Research Systems, and PSINet. Kathleen holds a Master of Science degree in Computer Science from Rensselaer Polytechnic Institute, as well as a Bachelor of Science degree in Mathematics from Siena College.