Preliminary Post Incident Review Executive Summary - CrowdStrike
CrowdStrike has released an executive summary for the preliminary post incident review.
You can view the executive summary PDF here - crowdstrike.com
Preliminary Post Incident Review - CrowdStrike
CrowdStrike has provided a preliminary post incident review.
A full root cause analysis will be provided once the full investigation has been completed by CrowdStrike.
You can view the preliminary post incident review here - crowdstrike.com
Dashboard to Discover Affected Assets
CrowdStrike released a dashboard query to assist customers in finding assets that may be impacted by the malformed channel file. At Cythera, we have adapted this dashboard to provide additional counts and better visibility to host data to assist in remediation. This dashboard has been uploaded to all of our managed customers and is accessible using the following method.
- Log in to the Falcon console at falcon.crowdstrike.com
- Open the navigation menu by clicking the icon in the top-left corner, then go to Next-Gen SIEM -> Log Management -> Dashboards
- In the search bar, copy and paste the following: “cythera_hosts_possibly_impacted_by_windows_crashes”, then click on the resulting dashboard.
- Read the Dashboard Details widget to learn how the dashboard functions.
Status Definitions
- OK: Asset is functioning as normal. No intervention required.
- Check: Asset received the malformed channel file. Manual intervention may be required.
- Verify: CrowdStrike has not been able to determine if the asset is in a normal or abnormal state.
CrowdStrike has begun dissecting the outage with their technical analysis here.
Microsoft has released a more automated recovery tool. You will still need the BitLocker recovery key for the device. Instructions HERE.
CrowdStrike Engineering has identified a content deployment related to this issue and reverted those changes. Ongoing updates from CrowdStrike can be found HERE.
Cythera has re-enabled auto updates for all of our managed clients, and they will automatically receive the corrected update. No other action is required unless you have devices that have blue screened and are not resolved by a reboot.
- To search for devices that have already received the fix and will not be affected, you can run the following search query from the CrowdStrike Falcon user interface:
(#event_simpleName = * or #ecs.version = *) | ("C-00000291*.sys") and (CompletionEventId = "Event_ChannelDataDownloadCompleteV1") | groupBy([ComputerName])
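If you also want a per-host event count rather than just a list of computer names, the same query can end with an aggregated groupBy. This is a minimal variant assuming standard CrowdStrike LogScale aggregation syntax; it is not part of CrowdStrike's published query:
(#event_simpleName = * or #ecs.version = *) | ("C-00000291*.sys") and (CompletionEventId = "Event_ChannelDataDownloadCompleteV1") | groupBy([ComputerName], function=count())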
If hosts are still crashing and are unable to stay online long enough to receive the channel file changes, the following steps can be used to work around the issue.
Workaround Steps (this fix is only for machines that have blue screened and are not resolved by a reboot):
Note: Putting the host on a wired network (as opposed to WiFi) and using Safe Mode with Networking can help remediation. There are also reports that simply rebooting affected machines multiple times may allow the device to finally get the fixed patch and boot normally.
1. Boot Windows into Safe Mode or the Windows Recovery Environment
2. Navigate to the C:\Windows\System32\drivers\CrowdStrike directory
3. Locate the file matching “C-00000291*.sys” and delete it (an example command is shown after the note below).
4. Shut down the host, then start it again from the powered-off state.
Note: BitLocker-encrypted hosts may require a recovery key.
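If a command prompt is available in Safe Mode or the Windows Recovery Environment, steps 2 and 3 can be performed with a single command. This is a sketch only: in the Recovery Environment the Windows volume is not always mounted as C:, so adjust the drive letter to match your system.
del C:\Windows\System32\drivers\CrowdStrike\C-00000291*.sys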
BitLocker recovery resources:
- BitLocker recovery via GPO (document)
- BitLocker recovery via SCCM
- BitLocker in Azure (additional Microsoft Azure detail)
CrowdStrike Support
This is a tested fix provided by CrowdStrike Support. It only removes the problematic channel file; your endpoint protection remains in place and functional.
Workaround Steps for public cloud or similar environments, including virtual machines:
Option 1:
1. Detach the operating system disk volume from the impacted virtual server.
2. Create a snapshot or backup of the disk volume before proceeding further, as a precaution against unintended changes.
3. Attach/mount the volume to a new virtual server.
4. Navigate to the %WINDIR%\System32\drivers\CrowdStrike directory.
5. Locate the file matching “C-00000291*.sys” and delete it (an example command is shown after this list).
6. Detach the volume from the new virtual server.
7. Reattach the fixed volume to the impacted virtual server.
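As a sketch only, assuming the detached volume has been attached to the recovery server and mounted as drive F: (adjust the drive letter to match how the volume actually mounts), step 5 can be performed from an elevated command prompt on that server:
del F:\Windows\System32\drivers\CrowdStrike\C-00000291*.sys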
Option 2:
Roll back to a snapshot taken before 04:09 UTC on 19 July 2024.
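For Azure-hosted VMs, one way to perform this rollback is to create a new managed disk from the pre-incident snapshot and swap it in as the OS disk. The following is only a sketch of that flow using the Azure CLI; the resource group, VM and disk names are placeholders, so verify the commands against your own environment before running them.
az vm deallocate -g <resource-group> -n <vm-name>
az disk create -g <resource-group> -n <new-os-disk> --source <snapshot-name-or-id>
az vm update -g <resource-group> -n <vm-name> --os-disk <new-os-disk>
az vm start -g <resource-group> -n <vm-name>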
Workaround Steps for Azure via serial console:
1. Log in to the Azure console, go to Virtual Machines and select the affected VM.
2. In the upper left of the console, click "Connect", then "More ways to connect", then "Serial Console".
3. Once SAC has loaded, type cmd and press Enter.
4. Type: ch -si 1
5. Press any key (for example the space bar), then enter Administrator credentials.
6. Run one of the following, depending on whether the host needs network access while in Safe Mode:
bcdedit /set {current} safeboot minimal
bcdedit /set {current} safeboot network
7. Restart the VM.
Optional: to confirm the boot state, run the following command:
wmic COMPUTERSYSTEM GET BootupState
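If BootupState returns "Fail-safe boot" or "Fail-safe with network boot", the VM is still configured to start in Safe Mode. Once the problematic channel file has been removed and the host is stable, the safeboot value set above will need to be cleared so that later restarts boot normally. This follow-up step is not listed in the guidance above; it assumes standard bcdedit behaviour.
bcdedit /deletevalue {current} safeboot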