Extended disk access on newly created ext4 partitions

i recently purchased a new 4tb wd external hard drive and was a bit perplexed when the disk access light continued to blink for an extended time after i formatted the entire disk to ext4 in gparted. i could see in conky that data was being written, but since i hadn't added any files, i was curious about what was going on. it finally occurred to me to check with iotop -o.

the active process was listed as ext4lazyinit. after a web search, i came across this page with some helpful info:
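for anyone who wants to check for this on their own machine, here's a small sketch. note that iotop -o needs root; looking for the kernel worker thread by name in ps output is just an alternative way to spot the same thing without it:

```shell
# iotop -o (as root) shows only processes actually doing i/o.
# without root, the lazy init worker can be spotted by its thread
# name, ext4lazyinit, in plain ps output:
if ps -eo comm | grep -q '^ext4lazyinit$'; then
  status="running"
else
  status="idle"
fi
echo "ext4 lazy init worker: $status"
```

if it prints "running", the blinking access light is just the inode tables being zeroed in the background.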

When creating an Ext4 file system, the existing regions of the inode tables must be cleaned (overwritten with nulls, or “zeroed”). The “lazyinit” feature should significantly accelerate the creation of a file system, because it does not immediately initialize all inode tables, initializing them gradually instead during the initial mounting process in background (from Kernel version 2.6.37).[18][19] Regarding this see the extracts from the mkfs.ext4 man pages:[20]

today i finally got around to reformatting my older 2tb wd external drive, and enough time had passed that i had forgotten about the lazyinit process. i was trying to run smart tests on the drive, but the short test, with an estimated run time of 2 minutes, stalled at 90% and ran well past that estimate on two attempts before i aborted it each time. that's when i noticed the continuous blinking of the access light and ran iotop.

sure enough, ext4lazyinit was running. when it finished, i was able to run the short test with no issues.
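for anyone unfamiliar with smart tests, the sequence i'm describing looks something like this. the device name is only a placeholder (check yours with lsblk), and smartctl needs root and the smartmontools package:

```shell
# /dev/sdX is a placeholder -- identify your drive with lsblk first
smartctl -t short /dev/sdX      # start the ~2 minute short self-test
sleep 130                       # give it time to complete
smartctl -l selftest /dev/sdX   # read back the self-test log
```

if lazyinit is still running when you start the test, expect the same stall i saw.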

for reference, the 4tb drive (sata 3.1, 6.0 Gb/s) took about 30 minutes to finish, and the 2tb drive (sata 3.0, 3.0 Gb/s) took a little less than an hour.


I had a similar problem a while ago, but I'm about to install a 3tb wd drive, so could you answer this: I'm a newbie to linux, so could you explain this in simple terms so that when I install/format the drive I don't have any problems?

i think that would depend on how you plan on formatting the drive. if you can share what you were thinking of doing and some details about your drive, i might be able to give a more specific answer.

for example, how long this initialization process takes may depend on what version of usb your new disk uses. in general i would say be prepared to let the disk just go through this process the first time you mount it after formatting.
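to make that concrete, here's a sketch of the choice you have at format time. the image file below is just a safe stand-in for practice; on a real drive you'd run mkfs.ext4 against the partition (something like /dev/sdb1, but that name is only a placeholder, so double-check with lsblk first):

```shell
# practice on a throwaway image file instead of a real disk
img=$(mktemp /tmp/ext4-demo.XXXXXX)
truncate -s 64M "$img"

# default behavior: inode tables are zeroed lazily, in the
# background, after the filesystem is first mounted
mkfs.ext4 -F -q "$img"

# disable lazy init: zero everything up front at mkfs time,
# so no background process runs later (mkfs itself takes longer)
mkfs.ext4 -F -q -E lazy_itable_init=0,lazy_journal_init=0 "$img"
rc=$?
rm -f "$img"
```

either way is fine; the lazy default just trades a longer first mount for a faster format.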


The initialization process is a recoverable background task; it's meant to be deferred and to take its time so that, as the ext4 man page puts it,

This minimizes the impact on system performance while the filesystem’s inode table is being initialized.

As long as the filesystem is created with the uninit_bg flag enabled (which it is by default), it’s safe for the initialization to be put off indefinitely. So, you don’t have to worry about waiting for the lazy init process to finish or anything like that — if it doesn’t finish before the disk is unmounted, it’ll be resumed when it next comes online.
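If you want to verify what your filesystem was created with, dumpe2fs lists the feature flags. Here's a sketch against a throwaway image file; on a real system you'd point it at the partition instead (something like /dev/sdb1, a placeholder here). Note that on newer e2fsprogs, metadata_csum can appear in place of uninit_bg, which it supersedes:

```shell
# inspect filesystem feature flags; demoed safely on an image file
img=$(mktemp /tmp/ext4-feat.XXXXXX)
truncate -s 32M "$img"
mkfs.ext4 -F -q "$img"
features=$(dumpe2fs -h "$img" 2>/dev/null | grep -i 'filesystem features')
echo "$features"
rm -f "$img"
```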

There are also mount options you can use to adjust how low-impact the process is, or to put it off entirely until the next mount:

Mount options for ext4

       noinit_itable
              Do not initialize any uninitialized inode table blocks in the
              background.  This feature may be used by installation CD's so
              that the install process can complete as quickly as possible;
              the inode table initialization process would then be deferred
              until the next time the filesystem is mounted.

       init_itable=n
              The lazy itable init code will wait n times the number of
              milliseconds it took to zero out the previous block group's
              inode table.  This minimizes the impact on system performance
              while the filesystem's inode table is being initialized.
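To make those options concrete: a one-off mount with deferral would be `mount -o noinit_itable /dev/sdb1 /mnt/backup`, and a persistent setup goes in /etc/fstab. The device, UUID, and mountpoint below are placeholders, not anything from this thread:

```
# one-off (root required; device and mountpoint are placeholders):
#   mount -o noinit_itable /dev/sdb1 /mnt/backup
#
# persistent /etc/fstab entry -- wait 30x the per-blockgroup zeroing
# time between writes, making the background init gentler than default:
UUID=<your-fs-uuid>  /mnt/backup  ext4  defaults,init_itable=30  0  2
```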

I can’t seem to find the default value for init_itable in the extract above, but the kernel’s ext4 documentation gives the default for n as 10.