Diagnosing Boot Time Delays with Bootchart, systemd-analyze, and U-Boot Timestamps in Linux

The boot process of a Linux-based system is often perceived as a simple, linear chain of events — the power is applied, the bootloader runs, the kernel initializes, the init system launches services, and eventually, a login prompt or graphical environment appears. In practice, however, the boot process is far from simple. It’s an intricate orchestration of multiple hardware and software components, all interdependent and subject to various delays that can easily accumulate into noticeable lag. On embedded systems, servers with strict uptime SLAs, and consumer devices where responsiveness is paramount, every fraction of a second matters. Diagnosing boot time delays in Linux is therefore not just about curiosity — it’s about enabling faster device readiness, improving the user experience, and, in many industrial and automotive scenarios, meeting contractual boot time requirements.

In the Linux ecosystem, three powerful tools form the backbone of serious boot time analysis: Bootchart, systemd-analyze, and U-Boot timestamps. Each of these tools operates at a different stage of the boot process and offers distinct insights into what’s happening under the hood. Bootchart provides a visual representation of process execution and system activity from the moment the kernel starts until the system is fully up. systemd-analyze offers a high-level and detailed breakdown of the time spent in firmware, the bootloader, the kernel, and user space, along with dependency graphs that reveal service-level bottlenecks. Meanwhile, U-Boot timestamps allow developers to measure delays that occur before the kernel even takes control, focusing on hardware initialization and bootloader execution paths. Used together, they offer a complete timeline from power-on to login, enabling engineers to pinpoint slow stages and apply targeted optimizations.


Understanding the Boot Timeline

To properly diagnose and optimize boot time, one must first understand the stages of the Linux boot sequence. On systems using U-Boot as a bootloader, the journey begins the moment the device powers up. The CPU starts executing code from a predefined address, typically in internal ROM, where the SoC’s boot ROM code initializes the bare minimum hardware needed to load the bootloader. U-Boot then takes over, performing a series of hardware initializations: setting up DRAM, configuring clocks, enabling serial output, and preparing storage interfaces like eMMC, NAND, or NOR flash. Each of these steps can be a source of delay, particularly if hardware components require long stabilization times or firmware blobs need to be loaded.

Once U-Boot hands control to the Linux kernel (via a bootm or bootz command), the kernel begins decompressing itself, initializing memory subsystems, enumerating devices, and probing drivers. Following this, the init system — in modern Linux distributions often systemd — takes over to launch system services. This final user-space initialization phase can be surprisingly slow if services are started sequentially, if dependency trees are inefficiently structured, or if background jobs are blocking other processes from starting.

By the time the login shell or graphical display manager appears, the system has gone through hundreds of initialization steps. Without proper measurement tools, it’s nearly impossible to know where time is being lost — hence the need for Bootchart, systemd-analyze, and U-Boot timestamps.
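Even before reaching for dedicated tooling, the kernel's own timestamped log exposes coarse delays. The helper below is a hypothetical sketch (not a standard tool): it flags large gaps between consecutive dmesg timestamps, which often correspond to slow driver probes or waits on hardware:

```shell
# boot_gaps: flag pauses longer than 0.5 s between consecutive kernel
# log messages (large gaps often mark slow driver probes).
# Expects dmesg's default "[ seconds.micros] message" format.
boot_gaps() {
  awk -F'[][]' 'NF > 1 {
    t = $2 + 0
    if (prev != "" && t - prev > 0.5)
      printf "gap %.3fs before:%s\n", t - prev, $3
    prev = t
  }'
}
```

Typical use is simply `dmesg | boot_gaps`; the 0.5 s threshold is arbitrary and worth lowering on systems with tight boot budgets.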


Bootchart: Visualizing the Boot Process

Bootchart is a powerful tool for capturing and visualizing the execution of processes and system resource usage during the boot sequence. It works by collecting CPU, disk, and process data, which is then rendered into a graphical timeline. This timeline reveals when processes start and stop, how much CPU they consume, and how disk I/O correlates with CPU activity. By spotting unusually long bars or idle gaps, developers can identify inefficiencies.

In modern systemd-based systems, Bootchart’s role is filled by systemd-bootchart, which requires no kernel patching or extra init scripts. Because it must observe the entire boot, it is started in place of init by appending the following to the kernel command line (it samples the system, then executes systemd as usual):

Bash
init=/usr/lib/systemd/systemd-bootchart

The resulting SVG timelines are written to /run/log/ (files named bootchart-*.svg) and can be opened directly in a browser. The classic standalone collector, bootchartd, instead stores its log in /var/log/bootchart.tgz, which is rendered with pybootchartgui.

The beauty of Bootchart lies in its ability to tell a story at a glance. You might notice, for instance, that certain services are being launched far earlier than needed, consuming CPU time that could otherwise be allocated to critical tasks. Or you might see that disk I/O spikes during early boot are starving CPU-bound tasks, suggesting that I/O scheduling could be tuned. In some cases, Bootchart reveals “hidden” services or scripts you didn’t even know were running during boot.


systemd-analyze: Breaking Down Boot Phases

While Bootchart excels at process-level visualization, systemd-analyze provides a high-level and structured breakdown of where time is spent in the boot process. Running:

Bash
systemd-analyze

will produce output like:

Bash
Startup finished in 2.431s (kernel) + 3.987s (userspace) = 6.418s

This alone is useful, but systemd-analyze goes deeper. The blame subcommand ranks services by their startup time:

Bash
systemd-analyze blame

This produces a sorted list of services, allowing you to quickly see which ones are the main culprits in slowing down the boot. Often you’ll find that a single poorly configured service — such as a network manager waiting for DHCP leases or a misconfigured database daemon — accounts for a large percentage of the delay.
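Because units start in parallel, the times printed by blame overlap and do not add up to the wall-clock boot time, but a rough total is still handy when tracking regressions across builds. A small sketch (hypothetical helper; it only handles the common "Xs" and "Xms" entries and skips "Xmin …" lines):

```shell
# blame_total: roughly sum the per-unit times from `systemd-analyze blame`.
# Handles "2.431s" and "123ms" entries; "1min 30.5s" lines are skipped.
blame_total() {
  awk '{
    v = $1
    if (v ~ /ms$/)     s += v / 1000   # "123ms" -> seconds
    else if (v ~ /s$/) s += v + 0      # "2.431s"
  } END { printf "%.3fs\n", s }'
}
```

Usage: `systemd-analyze blame | blame_total`.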

The critical-chain subcommand offers an even more valuable perspective:

Bash
systemd-analyze critical-chain

This shows the chain of services that directly determine how soon the system can reach the target boot milestone (such as default.target). Unlike the blame output, which might include long-running services that don’t block other tasks, critical-chain focuses on the blocking path. Optimizing services in this chain yields the most noticeable improvements.

For graphical analysis, systemd-analyze plot generates an SVG visualization of the boot process:

Bash
systemd-analyze plot > boot.svg

Opening boot.svg in a web browser allows you to visually inspect parallelization and see which units are waiting on which dependencies. This helps identify cases where a service is configured to start only after another has fully completed, even though they could run concurrently.
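When the plot exposes such an unnecessary ordering, a drop-in override can relax it without editing the packaged unit file. A sketch, assuming a hypothetical myapp.service that was ordered after network-online.target but does not actually need the network:

```ini
# /etc/systemd/system/myapp.service.d/override.conf
# (hypothetical unit; run `systemctl daemon-reload` after adding this)
[Unit]
# An empty assignment clears the packaged ordering; the second line then
# lets the unit start as soon as basic system setup is complete.
After=
After=basic.target
```

Re-running systemd-analyze critical-chain afterwards confirms whether the unit has left the blocking path.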


U-Boot Timestamps: Measuring Early Boot Delays

While Bootchart and systemd-analyze are excellent for analyzing delays after the kernel starts, they cannot tell you much about what happens in the bootloader. This is where U-Boot’s timestamp functionality becomes invaluable. By enabling CONFIG_BOOTSTAGE in U-Boot’s build configuration, you can record the time taken at various points in the bootloader’s execution.

For example, in U-Boot’s build configuration (via make menuconfig or the board’s defconfig):

Bash
CONFIG_BOOTSTAGE=y
CONFIG_BOOTSTAGE_REPORT=y

Once enabled, you can insert calls like:

C
bootstage_mark_name(BOOTSTAGE_ID_ALLOC, "board_init_f");

at critical points in the bootloader code. When U-Boot finishes, it will print a detailed table of how long each stage took, down to the millisecond. This allows you to identify bottlenecks in DRAM training, flash reading, peripheral initialization, and even U-Boot scripting.

On NAND/NOR-based embedded systems, you might discover that reading the kernel image takes several seconds because of slow storage or poorly optimized read routines. In such cases, enabling faster read modes (like Quad SPI for NOR) or compressing the kernel image differently can make a dramatic difference.


Building the Full Boot-Time Picture

The real power of these tools emerges when they are used together. By correlating U-Boot timestamps with early kernel printk logs and post-kernel data from Bootchart and systemd-analyze, you can assemble a complete map of the boot process from power-on to user space readiness. This end-to-end perspective is essential for diagnosing complex delays.

Consider an embedded system where the total boot time is 12 seconds. Using U-Boot timestamps, you might determine that 4 seconds are spent in the bootloader, with 2 seconds lost to DRAM initialization. Bootchart might show that once the kernel starts, the CPU is idle for 1.5 seconds waiting for disk I/O to complete, while systemd-analyze blame reveals that a network service is taking 2 seconds to start because it’s waiting for DHCP. With all this data, you can optimize in three directions simultaneously: tuning DRAM initialization parameters in U-Boot, preloading certain files into RAM to reduce I/O latency, and reconfiguring the network service to start in parallel or use static addressing.
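The 12-second example above can be laid out as a cumulative timeline with a few lines of shell. The stage names and durations below are the illustrative figures from this example, not measurements:

```shell
# timeline: turn "duration name" pairs into a cumulative boot timeline.
timeline() {
  awk '{ t += $1; printf "%-18s %5.1fs  (elapsed %5.1fs)\n", $2, $1, t }'
}

printf '%s\n' '4.0 bootloader' '1.5 kernel+disk-io' \
              '2.0 network-dhcp' '4.5 other-userspace' | timeline
```

Feeding real numbers from U-Boot's bootstage report and systemd-analyze into a table like this makes it obvious which stage deserves attention first.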


Useful Commands for Boot Time Diagnosis

Some key commands you’ll use repeatedly in this process include:

Bash
# Show total boot time breakdown
systemd-analyze

# List services sorted by startup duration
systemd-analyze blame

# Show critical startup chain
systemd-analyze critical-chain

# Generate a boot process visualization
systemd-analyze plot > boot.svg

# Profile the boot with systemd-bootchart (kernel cmdline)
init=/usr/lib/systemd/systemd-bootchart

# Render a classic bootchartd log
pybootchartgui /var/log/bootchart.tgz

In U-Boot:

Bash
=> bootstage report

will display the recorded stage timings.


Beyond Measurement: Optimization Strategies

Diagnosing boot delays is only the first step; the ultimate goal is optimization. Armed with data from Bootchart, systemd-analyze, and U-Boot, developers can implement a variety of strategies:

  • Parallelizing services where dependencies allow, to reduce sequential wait times.
  • Deferring non-critical services until after the system reaches a usable state.
  • Optimizing kernel command line parameters to skip unneeded probes or enable faster modes.
  • Reducing initramfs size or placing frequently accessed binaries in faster storage.
  • Tuning U-Boot scripts to minimize redundant hardware checks.
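As one concrete illustration of the kernel command line point above, a few widely used parameters (whether each is appropriate depends on the board and use case):

```
# Example kernel command-line additions (illustrative, board-dependent):
#   quiet              cut console output; serial printing can be slow
#   loglevel=3         suppress non-critical kernel messages
#   rootwait           wait for a slowly probing root device to appear
#   raid=noautodetect  skip RAID autodetection when unused
```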

For embedded devices, the combination of early-boot optimizations in U-Boot and late-boot improvements in systemd can often cut boot times in half.