
What Should You Do After a Cyber Breach (and How To Stop It Happening Again)

Josh Kirk
  • 18 Mar 2026
  • 8 min read

Introduction

If you’re reading this in the middle of an incident, start with “Part 1: What to do immediately”. If you’re reviewing your readiness, you can skip to “Part 2: How to reduce the chances of a repeat breach”.

A cyber breach is one of those moments where the pressure to “do something” can be intense, but the wrong action taken quickly can make recovery slower, more expensive, or harder to evidence later. For UK SMEs in the 50–150 user bracket, the goal is usually the same: contain the impact, regain control of identity and access, keep the business operating where possible, and then fix the underlying gaps that allowed it to happen in the first place.

The quick answer

Treat the first phase as containment and control, not tidy‑up. Secure identity, stop further spread, preserve enough evidence to understand what happened, and confirm you can restore safely. Once things are stable, use the incident as a structured learning moment, because repeat breaches are often caused by the same small set of weaknesses being left unaddressed.

If your cyber breach is more significant and you cannot gain access to your systems at all, the steps you need to take will likely differ significantly.

Part 1 (Immediate): What to do if you think you’ve been breached

You might be in one of two situations. Either you have strong signals that you are compromised (a ransom note, suspicious logins, payment diversion, unusual admin activity), or you have something that “feels off” (slow systems, locked accounts, reports of weird emails). In both cases, the early decisions are similar: contain, confirm, and avoid turning a bad day into a long week.

1) First, decide whether you are containing or investigating

This sounds like semantics, but it stops a common mistake. Many businesses jump straight to investigating and cleaning up, and in doing so they unintentionally remove the trail that would have helped them understand the scope (the attack vector, root cause, and timeline could be required or wanted by a cyber insurer). Containment is about stopping the bleeding. Investigation is about understanding what happened and how far it reached. You can do both, but you should be clear what the priority is for the next hour.

If you are actively losing control (for example, more accounts are being compromised, new inbox rules are appearing, or systems are being encrypted), containment should take precedence. If things seem stable but suspicious, you can take a slightly more measured approach, but you still want to move quickly around identity and admin access.

2) Regain control of identity and admin access first

For most SMEs, identity is the centre of gravity. If an attacker has access to email accounts, especially privileged accounts, they can pivot into file access, password resets, supplier fraud, and wider compromise.

Start by focusing on the accounts that can change everything else: global administrators, privileged IT accounts, finance accounts, and anyone with access to shared mailboxes or payments. In many cases you’ll want to revoke active sessions and reset credentials, but do it in a controlled way and document what you change as you go.

If your environment uses Microsoft 365, this often means looking for unusual sign‑ins, unexpected admin role changes, and suspicious mail rules or forwards. If you are not sure how to do that safely, it is usually better to get help early rather than experiment under pressure.
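To make that first check concrete, here is a minimal Python sketch of the kind of triage you might run over an exported sign‑in log. The CSV columns (`user`, `country`, `status`) and the example data are illustrative assumptions, not the format of any specific Microsoft 365 export, and a flagged row is a prompt to investigate rather than proof of compromise.

```python
import csv
import io

# Hypothetical sign-in export: columns vary by tenant and tooling,
# so treat this format as an assumption to adapt, not a standard.
SIGNIN_CSV = """user,country,status
alice@example.co.uk,GB,Success
alice@example.co.uk,NG,Success
admin@example.co.uk,GB,Failure
"""

EXPECTED_COUNTRIES = {"GB"}  # adjust to where your staff actually work

def flag_unusual_signins(csv_text, expected=EXPECTED_COUNTRIES):
    """Return successful sign-ins from outside the expected countries."""
    flagged = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row["status"] == "Success" and row["country"] not in expected:
            flagged.append((row["user"], row["country"]))
    return flagged

print(flag_unusual_signins(SIGNIN_CSV))
# [('alice@example.co.uk', 'NG')]
```

The same pattern (export, filter, review) applies to mail rules and admin role changes: the point is to get unusual activity in front of a human quickly, not to automate the judgement.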

3) Contain spread, but don’t destroy evidence unnecessarily

There is a balance here. In a serious incident you may need to isolate devices to prevent spread, but sweeping actions like wiping machines, deleting mailboxes, or ripping out tooling can remove the evidence you later need to confirm what happened, respond to an insurer, or learn properly from the event.

A sensible approach for most SMEs is to isolate the most suspicious endpoints first, and to capture basic facts as you go. That “basic facts” list does not need to be a forensic report. It can be as simple as noting times, accounts involved, what you observed, and what actions you took. This is often where specialist support can really help.
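If it helps to see what “basic facts” could look like in practice, here is a small Python sketch of a running incident log: one timestamped entry per observation or action. The account names are made up, and in a real incident you would keep a copy of this record somewhere the attacker cannot reach.

```python
from datetime import datetime, timezone

# A minimal incident log: one entry per thing you observed or did.
incident_log = []

def log_event(kind, account, detail):
    """Record what happened, when, and which account was involved.

    kind is "observed" (something you saw) or "action" (something you did).
    """
    entry = {
        "time": datetime.now(timezone.utc).isoformat(timespec="seconds"),
        "kind": kind,
        "account": account,
        "detail": detail,
    }
    incident_log.append(entry)
    return entry

log_event("observed", "finance@example.co.uk",
          "New forwarding rule sending mail to an external address")
log_event("action", "finance@example.co.uk",
          "Revoked active sessions and reset the password")
```

Even a record this simple answers the questions an insurer or responder will ask first: what did you see, when, and what did you change.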

4) Confirm you can restore safely (and be careful with assumptions)

A lot of incident response fails at the point where a business discovers that “we have backups” is not the same as “we can restore cleanly”. If the incident involves ransomware or suspected lateral movement, your backup strategy becomes central. You need to understand what you can restore, how far back you may need to go, and whether your backups are likely to be affected.

This is where SMEs sometimes make a well‑intentioned but risky decision: restoring quickly without validating the infection path, only to reinfect the restored systems. You do not need a perfect investigation before you restore, but you do need enough understanding to avoid replaying the same compromise.

If you aren’t in the middle of a breach, this is a good reminder to review your business continuity plan, test it, workshop it and remind the team where a physical copy exists. If you would like help workshopping your business continuity plan then get in touch with us.

5) Communicate early, but keep it coordinated

In the first hours, communication should be calm and controlled. Staff need to know what to do and what not to do (for example, not clicking suspicious emails, not approving MFA prompts they did not initiate, and reporting anything unusual). Leadership needs a clear summary: what’s affected, what’s stable, what the next decision points are, and when the next update will be.

There may also be external stakeholders to consider. That can include insurers, key suppliers, and in some cases legal or compliance advisors. It’s important not to treat this article as legal guidance, because notification requirements depend on the nature and scope of the incident, but it is fair to say this: if you have cyber insurance, you typically want to notify them early and follow their process, because it can affect how support is provided later.

A quick, realistic “first hours” sequence (without turning this into a checklist article)

People often ask for a strict minute‑by‑minute plan. In reality, the order varies depending on what’s happening, but the flow below is a reliable mental model:

First, stabilise identity and privileged access so the attacker can’t keep expanding their control. Second, isolate what looks compromised while preserving enough visibility to understand scope. Third, confirm your restore path and start planning recovery in a way that won’t reintroduce the same issue.

If your incident is clearly limited to email, you may focus heavily on mail rules, sign‑ins, conditional access, and user resets. If endpoints are involved, you will likely need a stronger containment posture, and your ability to recover will depend on whether you have modern endpoint detection tooling and clean recovery images. If servers or line‑of‑business systems are affected, the recovery plan becomes more complex, and it is often where external support pays for itself because speed and sequencing matter.


Organise An External Penetration Test Here

Understand any external weaknesses before an attacker does, with an external penetration test.


Part 2 (Prevention): How to reduce the chances of a repeat breach

If you have just been through an incident, or you’re trying to ensure you never have to, this is the part to read. Most repeat breaches are not caused by the same “one click”. They happen because the conditions that allowed the incident remain in place: too much access, weak identity controls, poor visibility, and backups that exist but aren’t routinely proven.

1) Treat identity as a security boundary, not just a login

For 50–150 user businesses (you could even stretch this to any SME), identity security is often the highest return improvement. It’s also one of the easiest to underestimate because it feels like “IT plumbing” rather than a business control.

A strong identity baseline typically includes multi‑factor authentication for all users (a standard that Cyber Essentials now expects), stricter controls for admins, and policies that reduce risky access patterns. It also includes the cultural side: training staff to treat unexpected MFA prompts as a red flag, because “push fatigue” is a real attack path.

The difference between “we have MFA” and “we have resilient identity control” is usually governance: are there still legacy admin accounts, shared credentials, unreviewed permissions, or exemptions that were created years ago and never revisited?
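One simple way to turn that governance question into something checkable is to review an export of privileged accounts for ones nobody has used recently. The sketch below is a Python illustration of the idea; the export format and account names are assumptions, not any specific tool's output.

```python
from datetime import date, timedelta

# Hypothetical export of accounts holding admin roles, with their
# last sign-in dates. Real exports will look different per tool.
admin_accounts = [
    {"user": "it-admin@example.co.uk", "last_sign_in": date(2026, 3, 10)},
    {"user": "old-consultant@example.co.uk", "last_sign_in": date(2024, 6, 1)},
]

def stale_admins(accounts, today, max_idle_days=90):
    """Flag privileged accounts unused for longer than max_idle_days."""
    cutoff = today - timedelta(days=max_idle_days)
    return [a["user"] for a in accounts if a["last_sign_in"] < cutoff]

print(stale_admins(admin_accounts, date(2026, 3, 18)))
# ['old-consultant@example.co.uk'] -- a candidate for removal or review
```

Run on a regular cadence, a check like this catches exactly the legacy accounts and forgotten exemptions that turn “we have MFA” into a false sense of security.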

2) Reduce privilege and clean up permissions (especially in SharePoint and Teams)

A lot of SMEs learn the hard way that security is not only about keeping attackers out, but also about controlling what they can see once they get in.

After a breach, it’s worth reviewing the permission model for your most sensitive information. Many organisations have file shares and SharePoint sites where permissions grew organically, which is understandable, but it creates a broad blast radius. It’s one of the reasons breaches feel catastrophic: too much is accessible to too many accounts.

You don’t need to “rebuild SharePoint” to improve this. Often the first step is simply identifying the highest-risk areas and tightening access gradually, while also creating clearer ownership so permissions don’t drift again.

3) Make endpoint protection and patching measurable

A common post-incident problem is that businesses want to “improve security” but can’t measure whether it is improving. That’s where endpoint detection and response, often shortened to EDR, comes in. In plain English, EDR is tooling that helps detect suspicious behaviour on devices and respond quickly, rather than relying on traditional antivirus alone.

Just as importantly, patching needs to be something you can evidence, not something you assume. That doesn’t mean forcing updates during working hours or creating disruption, but it does mean having visibility: what percentage of devices are current, which devices are unsupported, and where risk is accumulating. Use this as your reminder to check if you have any devices that are End of Life in terms of their support.
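As a sketch of what “evidence, not assumption” can mean here, the Python below checks a device inventory against recorded end-of-support dates. The inventory and the dates are illustrative assumptions; for real lifecycle dates, check the vendor's published support pages.

```python
from datetime import date

# Illustrative inventory: the support_ends dates are examples only,
# not authoritative lifecycle dates for these operating systems.
devices = [
    {"name": "LAPTOP-014", "os": "Windows 10 22H2",
     "support_ends": date(2025, 10, 14)},
    {"name": "LAPTOP-201", "os": "Windows 11 23H2",
     "support_ends": date(2026, 11, 10)},
]

def end_of_life(devices, today):
    """Return the names of devices whose OS support date has passed."""
    return [d["name"] for d in devices if d["support_ends"] < today]

print(end_of_life(devices, date(2026, 3, 18)))
# ['LAPTOP-014']
```

A report like this, produced monthly, is the difference between assuming your estate is current and being able to show where risk is accumulating.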

4) Prove backup and recovery, don’t just run backups

If you do one thing differently after an incident, make it this: move from “backup exists” to “recovery is practiced”.

That involves restore testing on a cadence that suits your business. For some SMEs that is monthly for critical systems with a quarterly deeper test; for others it is quarterly with clear evidence logs. The right answer depends on how quickly you need to recover and how complex your environment is, but the principle remains the same: if you can’t restore with confidence, you don’t have a recovery capability, you have hope.
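One simple, scriptable form of restore evidence is comparing a hash recorded at backup time with the hash of the restored copy. The Python sketch below simulates that check with temporary files; the paths and manifest idea are illustrative, not tied to any particular backup product.

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path):
    """Hash a file's contents so copies can be compared reliably."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def verify_restore(restored_path, expected_hash):
    """True only if the restored file matches the hash recorded at backup time."""
    return sha256_of(restored_path) == expected_hash

# Simulate one restore test: record a hash, "restore" the file, compare.
with tempfile.TemporaryDirectory() as tmp:
    original = Path(tmp) / "invoice.csv"
    original.write_text("invoice_id,amount\n1001,250.00\n")
    recorded = sha256_of(original)              # captured when the backup ran
    restored = Path(tmp) / "restored_invoice.csv"
    restored.write_text(original.read_text())   # stand-in for the real restore
    print(verify_restore(restored, recorded))   # True on a clean restore
```

The value is less in the code than in the habit: each test run leaves a dated pass/fail record you can show, which is exactly what “recovery is practiced” looks like.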

5) Introduce an incident playbook that’s realistic for your size

SMEs don’t need enterprise bureaucracy, but they do benefit from a small, clear incident playbook that answers basic questions: who makes decisions, who speaks to staff, who speaks to customers if needed, how systems are isolated, and where evidence is stored.

This matters because the worst time to decide roles is during the incident itself. It also helps reduce panic, which is usually the hidden driver of poor decisions.

One thing we also see frequently is a business that has got ahead and written an incident response plan, but it is stored on a device somewhere and nobody has stopped to ask: if you no longer have access to that system during the breach, what do you do then? What if ‘Chris’ is the person who owns the plan and leads it, but the attackers strike while he is on holiday? Cyber attacks are a full-time business for attackers, so they do their research on you just as you would on a key prospect you are trying to win.

6) If you use an MSP (or are considering one), focus on evidence and resilience

This is also where the “single person” model (or small team) often shows its limits. A lone internal IT generalist can be excellent, but it’s hard for one person to be simultaneously deep in security, Microsoft 365 governance, backups, networks, and incident response practice, while also doing day-to-day support. That’s not a criticism, it’s simply the reality of breadth.

One of the sustainable advantages of a good MSP is the depth and coverage across those disciplines, in the same way many businesses choose a marketing agency to access specialists across PPC, SEO, design and web rather than expecting one person to master everything at once. The right MSP approach is not about outsourcing responsibility. It’s about making resilience a property of the organisation, not the heroics of an individual.

The difference between the marketing example and IT is that the worst dabbling in PPC with little to no experience can do is waste your advertising budget. If you dabble in cybersecurity with little to no experience and get it wrong, the consequences can be disastrous.

An example of the difference between “fixed” and “improved”

A 95-user firm noticed unusual emails being sent from a senior account and a supplier queried a changed bank detail request. The internal team reset passwords quickly and thought the issue was solved. Two weeks later, the same pattern returned, because the root cause wasn’t addressed: MFA was enabled for some users but not enforced consistently, mailbox forwarding rules weren’t audited, and admin roles were broader than necessary.

When they approached the incident as both response and learning, the picture changed. Identity controls were tightened and standardised, privileged accounts were separated from day-to-day accounts, endpoint visibility was improved, and backup restore testing was scheduled and documented. The business didn’t just “get through it”. It reduced the chance of being back in the same situation again, and leadership regained confidence because they could see evidence of improvement rather than simply being told “it’s handled”.

FAQs

How do I know if we’ve definitely been breached?

You may not know immediately, and that’s normal. Treat strong indicators seriously (unusual sign-ins, unexpected MFA prompts, new email rules, unknown admin changes, encryption activity) and move into containment and verification rather than waiting for certainty.

Should we turn everything off?

There isn’t a straight answer. Sometimes isolating systems is appropriate, especially if you suspect active spread, but turning everything off indiscriminately can remove visibility and disrupt recovery. A controlled approach is usually better: isolate what appears compromised and focus on identity control first.

What should we tell staff?

Keep it simple and practical. Tell staff what to do if they notice unusual activity, remind them not to approve unexpected MFA prompts, and give them one place to report concerns. Avoid speculation until you know more.

Do we need to report the breach?

It depends on what data was involved and the scope of the incident. You should consider your legal obligations with appropriate advice, and if you have cyber insurance you should typically follow their incident process early. This article is not legal or financial advice.

What are the most common causes of repeat breaches?

In SMEs it’s usually a combination of inconsistent identity controls, over-broad permissions, limited endpoint visibility, and backups that haven’t been routinely proven through restore testing. The trigger, though, is often human: we are still some of the easiest points of failure.
