This operation traverses all the data in the pool once and verifies that all blocks can be read. This operation might negatively impact performance, though the pool's data should remain usable and nearly as responsive while the scrubbing occurs. To initiate an explicit scrub, use the zpool scrub command.
For example: zpool scrub tank. The status of the current scrubbing operation can be displayed by using the zpool status command. You can stop a scrubbing operation that is in progress by using the -s option, for example: zpool scrub -s tank. In most cases, a scrubbing operation to ensure data integrity should continue to completion. Stop a scrubbing operation at your own discretion if system performance is impacted by the operation. Routine scrubbing has the side effect of preventing power management from placing idle disks in low-power mode.
When a device is replaced, a resilvering operation is initiated to move data from the good copies to the new device. This action is a form of disk scrubbing. Therefore, only one such action can occur at a given time in the pool. If a scrubbing operation is in progress, a resilvering operation suspends the current scrubbing and restarts it after the resilvering is completed. For more information about resilvering, see Viewing Resilvering Status.
File System Repair: With traditional file systems, the way in which data is written is inherently vulnerable to unexpected failure causing file system inconsistencies. File System Validation: In addition to performing file system repair, the fsck utility validates that the data on disk has no problems. Controlling ZFS Data Scrubbing: Whenever ZFS encounters an error, either through scrubbing or when accessing a file on demand, the error is logged internally so that you can obtain a quick overview of all known errors within the pool.
Explicit ZFS Data Scrubbing: The simplest way to check data integrity is to initiate an explicit scrubbing of all data within the pool. For example: zpool scrub tank. The status of the current scrubbing operation can be displayed by using the zpool status command. When scheduling a scrub task, entering a range such as 30-35 in the Minutes field sets the task to run at minutes 30, 31, 32, 33, 34, and 35. You can also enter lists of values.
Enter individual values separated by a comma (,). Combining all of the above examples creates a schedule that runs a task each minute within the specified range during the chosen AM and PM hours, every other day. There is an option to select which Months the task runs. Leaving each month unset is the same as selecting every month. The Days of Week option schedules the task to run on specific days. This is in addition to any listed days.
For example, entering 1 in Days and setting Wed for Days of Week creates a schedule that starts a task on the first day of the month and every Wednesday of the month.
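For reference, the Advanced Scheduler fields follow cron-style conventions. The sketch below is a hypothetical illustration only; the specific minute, hour, and day values are assumptions rather than values from this guide.

# minute  hour  day-of-month  month  day-of-week
# run at minutes 30-35 during hours 1 and 13, every second day, any month, any weekday
30-35  1,13  */2  *  *
# run at 02:00 on the 1st of the month and additionally on every Wednesday
0  2  1  *  wed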
Threshold days: controls the task schedule by setting how many days must pass before a completed scrub can run again. If you schedule a scrub to run daily and set Threshold days to 7, the scrub attempts to run daily but only actually runs once seven days have passed since the last completed scrub. Using a multiple of seven ensures the scrub runs on the same weekday. Description: describes the scrub task. Schedule: how often to run the scrub task.
Choose one of the schedule presets or Custom to use the Advanced Scheduler. Scrubs identify data integrity problems, detect silent data corruptions caused by transient hardware issues, and provide early disk failure alerts. A data pool must exist before creating a scrub task. A scrub is split into two parts: metadata scanning and block scrubbing. During the metadata scanning phase, no completion time estimate is provided.
Protect and back up encryption keys and passphrases. Losing the encryption and recovery keys or the passphrase can result in irrevocably losing all access to the data stored in the encrypted pool! These options are available: Lock: only appears after a passphrase is created. Selecting this action requires entering the passphrase. Only the passphrase is used when both a passphrase and a recovery key are entered.
The services listed in Restart Services restart when the pool is unlocked. Individual services can be prevented from restarting by opening Restart Services and deselecting them. Deselecting services can prevent them from properly accessing the unlocked pool. Unlike a password, a passphrase can contain spaces and is typically a series of words.
A good passphrase is easy to remember but hard to guess. The administrator password is required for encryption key changes. Setting Remove Passphrase invalidates the current pool passphrase. Creating or changing a passphrase invalidates the pool recovery key. Recovery Key: generate and download a new recovery key file or invalidate an existing recovery key. Generating a new recovery key file invalidates previously downloaded recovery key files for the pool.
Reset Keys: reset the GELI master key used for pool encryption and invalidate all encryption keys, recovery keys, and any passphrase for the pool. A dialog opens to save a backup of the new encryption key. A new passphrase can be created and a new pool recovery key file can be downloaded.
The administrator password is required to reset pool encryption. If a key reset fails on a multi-disk system, an alert is generated. Do not ignore this alert, as doing so may result in the loss of data. The Pools screen can be used either during or after pool creation to add an SSD as a cache or log device to improve performance of the pool under specific use cases. Before adding a cache or log device, refer to the ZFS Primer to determine whether the system will benefit or suffer from the addition of the device.
To add a device to an existing pool, Extend that pool. Hot spares are drives connected to a pool but not in use. If the pool experiences the failure of a data drive, the system uses the hot spare as a temporary replacement. If the failed drive is replaced with a new drive, the hot spare drive is no longer needed and reverts to being a hot spare.
If the failed drive is detached from the pool, the spare is promoted to a full member of the pool. Hot spares can be added to a pool during or after creation. To add a spare during pool creation, click the Add Spare button. Select the disk from Available Disks and use the right arrow next to Spare VDev to add it to the section.
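The web interface handles these additions automatically, but the underlying operations map to zpool add. The following is a minimal sketch; the pool name tank and the device names are placeholders rather than values from this guide.

# add an SSD as a read cache (L2ARC) device
zpool add tank cache ada3
# add an SSD as a separate intent log (SLOG) device
zpool add tank log ada4
# add a hot spare
zpool add tank spare ada5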
If the existing pool is encrypted , an additional warning message shows a reminder that extending a pool resets the passphrase and recovery key. Extending an encrypted pool opens a dialog to download the new encryption key file. Remember to use the Encryption Operations to set a new passphrase and create a new recovery key file.
When adding disks to increase the capacity of a pool, ZFS supports the addition of virtual devices, or vdevs, to an existing ZFS pool. After a vdev is created, more drives cannot be added to that vdev, but a new vdev can be striped with another of the same type to increase the overall size of the pool. To extend a pool, the vdev being added must be the same type as the existing vdevs. Exporting a pool is used before physically disconnecting the pool so it can be imported on another system, or to optionally detach and erase the pool so the disks can be reused.
A dialog shows which system Services will be disrupted by exporting the pool, along with additional warnings for encrypted pools. Keep or erase the contents of the pool by setting the options shown in the export dialog. An encrypted pool cannot be reimported without a passphrase! When in doubt, use the instructions in Managing Encrypted Pools to set a passphrase. To instead destroy the data and share configurations on the pool, also set the Destroy data on this pool? option.
Data on the pool is destroyed, including share configuration, zvols, datasets, and the pool itself. The disk is returned to a raw state. Before destroying a pool, ensure that any needed data has been backed up to a different pool or system. When physically installing ZFS pool disks from another system, use the zpool export poolname command or a web interface equivalent to export the pool on that system. If hardware is not being detected, run camcontrol devlist from Shell.
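A rough command-line sketch of the move, assuming a pool named tank (a placeholder); the web interface Export/Disconnect and Import actions perform the equivalent steps:

# on the original system, cleanly export the pool
zpool export tank
# after moving the disks, list pools that are available for import on the new system
zpool import
# import the pool by name
zpool import tank
# if the pool or its disks are not detected, check what the controller sees
camcontrol devlist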
If the disk does not appear in the output, check to see if the controller driver is supported or if it needs to be loaded using Tunables. Before importing an encrypted pool , disks must first be decrypted. Click Yes, decrypt the disks.
Use the Disks dropdown menu to select the disks to decrypt. Click Browse to select the encryption key file stored on the client system. Enter the Passphrase associated with the encryption key, then click NEXT to continue importing the pool. The encryption key file and passphrase are required to decrypt the pool. If the pool cannot be decrypted, it cannot be re-imported after a failed upgrade or lost configuration.
This means it is very important to save a copy of the key and to remember the passphrase that was configured for the key. Refer to Managing Encrypted Pools for instructions on managing keys. Select the pool to import and confirm the settings. For security reasons, encrypted pool keys are not saved in a configuration backup file. After restoring a configuration backup on a new installation, export the encrypted pool and then import it again. During the import, the encryption keys can be entered as described above. Scrubs and how to set their schedule are described in more detail in Scrub Tasks.
The resulting screen will display the status and estimated time remaining for a running scrub or the statistics from the last completed scrub. When a scrub is cancelled, it is abandoned. The next scrub to run starts from the beginning, not where the cancelled scrub left off.
An existing pool can be divided into datasets. Permissions, compression, deduplication, and quotas can be set on a per-dataset basis, allowing more granular control over access to storage data. Like a folder or directory, permissions can be set on a dataset. Datasets are also similar to filesystems in that properties such as quotas and compression can be set and snapshots created.
ZFS provides thick provisioning using quotas and thin provisioning using reserved space. The ACL Mode option determines how chmod(2) behaves when adjusting file ACLs; see the zfs(8) aclmode property. Restricted does not allow chmod to make changes to files or directories with a non-trivial ACL.
An ACL is trivial if it can be fully expressed as a file mode without losing any access rules. Because Restricted prevents chmod from modifying files or directories with a non-trivial ACL, configuring an rsync task with this dataset could require adding --no-perms in the task Extra options field. Add Dataset: create a nested dataset, or a dataset within a dataset. Add Zvol: add a zvol to the dataset. Refer to Adding Zvols for more information about zvols. Edit Options: edit the dataset properties described in the options table. Note that Dataset Name and Case Sensitivity are read-only as they cannot be edited after dataset creation.
Edit Permissions: refer to Setting Permissions for more information about permissions. Delete Dataset: removes the dataset, snapshots of that dataset, and any objects stored within the dataset. When the dataset has active shares or is still being used by other parts of the system, the dialog shows what is still using it and allows forcing the deletion anyway. Caution : forcing the deletion of an in-use dataset can cause data loss or other problems.
Promote Dataset: only appears on clones. When a clone is promoted, the origin filesystem becomes a clone of the clone making it possible to destroy the filesystem that the clone was created from. Otherwise, a clone cannot be deleted while the origin filesystem exists.
Create Snapshot: create a one-time snapshot. A dialog opens to name the snapshot. Options to include child datasets in the snapshot and synchronize with VMware can also be shown. To schedule snapshot creation, use Periodic Snapshot Tasks. Deduplication is the process of ZFS transparently reusing a single copy of duplicated data to save space. Depending on the amount of duplicate data, deduplication can improve storage capacity, as less data is written and stored. However, deduplication is RAM intensive.
In most cases, compression provides storage gains comparable to deduplication with less impact on performance. Be forewarned that there is no way to undedup the data within a dataset once deduplication is enabled , as disabling deduplication has NO EFFECT on existing data.
The more data written to a deduplicated dataset, the more RAM it requires. When the system starts storing the DDTs (dedup tables) on disk because they no longer fit into RAM, performance craters. Further, importing an unclean pool can require several GiB of RAM per terabyte of deduped data, and if the system does not have the needed RAM, it will panic.
The only solution is to add more RAM or recreate the pool. Think carefully before enabling dedup! This article provides a good description of the value versus cost considerations for deduplication. For performance reasons, consider using compression rather than turning this option on.
If deduplication is changed to On , duplicate data blocks are removed synchronously. The result is that only unique data is stored and common components are shared among files. If deduplication is changed to Verify , ZFS will do a byte-to-byte comparison when two blocks have the same signature to make sure that the block contents are identical.
Since hash collisions are extremely rare, Verify is usually not worth the performance hit. However, any data that has already been deduplicated will not be un-deduplicated when the property is turned off. Only data stored after the property change is written without deduplication. The only way to remove existing deduplicated data is to copy all of the data off of the dataset, set the property to off, then copy the data back in again.
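For illustration, the behavior described above corresponds to the ZFS dedup property. A minimal sketch, assuming a pool named tank and a dataset named tank/vms (both hypothetical):

# enable deduplication on a dataset
zfs set dedup=on tank/vms
# use verify to force a byte-for-byte comparison when two blocks have the same hash
zfs set dedup=verify tank/vms
# check the current setting and the pool-wide dedup ratio
zfs get dedup tank/vms
zpool list -o name,size,allocated,dedupratio tank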
Alternately, create a new dataset with ZFS Deduplication left at Off , copy the data to the new dataset, and destroy the original dataset. Deduplication is often considered when using a group of very similar virtual machine images. However, other features of ZFS can provide dedup-like functionality more efficiently.
For example, create a dataset for a standard VM, then clone a snapshot of that dataset for other VMs. Only the differences between each created VM and the main dataset are saved, giving the effect of deduplication without the overhead.
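A minimal sketch of that clone-based approach, assuming a pool named tank and hypothetical dataset names:

# dataset holding the fully configured base VM image
zfs create tank/vm-base
# take a snapshot of the base image once it is ready
zfs snapshot tank/vm-base@gold
# create lightweight writable clones for individual VMs; only changes consume new space
zfs clone tank/vm-base@gold tank/vm1
zfs clone tank/vm-base@gold tank/vm2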
When selecting a compression type, balancing performance with the amount of disk space saved by compression is recommended. Compression is transparent to the client and applications, as ZFS automatically compresses data as it is written to a compressed dataset or zvol and automatically decompresses that data as it is read. Several compression algorithms are supported. Disabling compression is not recommended, as using LZ4 has a negligible performance impact and allows for more storage capacity.
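As a command-line illustration of the compression property (the dataset name is a placeholder):

# enable LZ4 compression on a dataset; data written afterward is compressed transparently
zfs set compression=lz4 tank/data
# check the setting and how well the stored data has compressed
zfs get compression,compressratio tank/data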
The zvol can be used as an iSCSI device extent. The configuration options are described in the zvol options table. Choosing a zvol for deletion shows a warning that all snapshots of that zvol will also be deleted. Setting permissions is an important aspect of managing data access. The web interface is meant to set the initial permissions for a pool or dataset to make it available as a share.
When a share is made available, the client operating system and ACL manager is used to fine-tune the permissions of the files and directories that are created by the client. Sharing contains configuration examples for several types of permission scenarios. This section provides an overview of the options available for configuring the initial set of permissions. For users and groups to be available, they must either be first created using the instructions in Accounts or imported from a directory service using the instructions in Directory Services.
The drop-down menus described in this section are automatically truncated to 50 entries for performance reasons. To find an unlisted entry, begin typing the desired user or group name for the drop-down menu to show matching results. An Access Control List ACL is a set of account permissions associated with a dataset and applied to directories or files within that dataset. These permissions control the actions users can perform on the dataset contents. ACLs are typically used to manage user interactions with shared datasets.
These non-inheriting entries are appended to the ACL of the newly created file or directory based on the Samba create and directory masks or the umask value. By default, a file ACL is preserved when it is moved or renamed within the same dataset. The SMB winmsa module can override this behavior to force an ACL to be recalculated whenever the file moves, even within the same dataset.
The ACL Manager opens. The following lists show each permission or flag that can be applied to an ACE with a brief description. Basic inheritance flags only enable or disable ACE inheritance. Advanced flags offer finer control for applying an ACE to new files or directories. If snapshots do not appear, check that the current time configured in Periodic Snapshot Tasks does not conflict with the Begin, End, and Interval settings.
This log file can be viewed in Shell. Each entry in the list includes the name of the dataset and snapshot. USED is the amount of space consumed by this dataset and all of its descendants. This value is checked against the dataset quota and reservation. The space used does not include the dataset reservation, but does take into account the reservations of any descendent datasets. The amount of space that a dataset consumes from its parent, as well as the amount of space freed if this dataset is recursively deleted, is the greater of its space used and its reservation.
When a snapshot is created, the space is initially shared between the snapshot and the filesystem, and possibly with previous snapshots. As the filesystem changes, space that was previously shared becomes unique to the snapshot, and is counted in the used space of the snapshot.
Deleting a snapshot can increase the amount of space unique to, and used by, other snapshots. The amount of space used, available, or referenced does not take into account pending changes. While pending changes are generally accounted for within a few seconds, disk changes do not necessarily guarantee that the space usage information is updated immediately.
Space used by individual snapshots can be seen by running zfs list -t snapshot from Shell. When a snapshot or clone is created, it initially references the same amount of space as the filesystem or snapshot it was created from, since its contents are identical. Child clones must be deleted before their parent snapshot can be deleted.
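Expanding on the zfs list command mentioned above, a short sketch that shows per-snapshot space usage for a pool named tank (a hypothetical name):

# list all snapshots under the pool with the space unique to each and the data they reference
zfs list -t snapshot -r -o name,used,referenced tank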
In order to delete a block in a snapshot, ZFS has to walk all the allocated blocks to see if that block is used anywhere else; if it is not, it can be freed. A default name is provided based on the name of the original snapshot. Click the SAVE button to finish cloning the snapshot.
A clone is a writable copy of the snapshot. Since a clone is actually a dataset which can be mounted, it appears in the Pools screen rather than the Snapshots screen. By default, -clone is added to the name of a snapshot when a clone is created. Clicking Yes causes all files in the dataset to revert to the state they were in when the snapshot was created.
Rollback is a potentially dangerous operation that causes any configured replication tasks to fail, as the replication system uses the existing snapshot when doing an incremental backup. To restore individual files from a snapshot, the recommended approach is to clone the snapshot and copy the needed data out of the clone. A range of snapshots can be deleted: set the left column checkboxes for each snapshot and click the Delete icon above the table. Be careful when deleting multiple snapshots. Periodic snapshots can be configured to appear as shadow copies in newer versions of Windows Explorer, as described in Configuring Shadow Copies.
To quickly search through the snapshots list by name, type a matching criteria into the Filter Snapshots text area. The listing will change to only display the snapshot names that match the filter text. A snapshot and any files it contains will not be accessible or searchable if the mount path of the snapshot is longer than 88 characters.
The data within the snapshot will be safe, and the snapshot will become accessible again when the mount path is shortened. For details of this limitation, and how to shorten a long mount path, see Path and Name Lengths. All snapshots for a dataset are accessible as an ordinary hierarchical filesystem, which can be reached from a hidden .zfs directory at the root of the dataset.
This is an advanced capability which requires some command line changes to achieve. In summary, the snapshot directory of the dataset must be made visible and shares configured so clients can reach it. The effect will be that any user who can access the dataset contents will be able to view the list of snapshots by navigating to the .zfs directory of the dataset. They will also be able to browse and search any files they have permission to access throughout the entire snapshot collection of the dataset.
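One of the command-line changes typically involved is the ZFS snapdir property, sketched below under the assumption of a dataset tank/data mounted at /mnt/tank/data (both hypothetical); any share-level settings that also need adjusting are not shown:

# make the hidden snapshot directory visible to clients of the dataset
zfs set snapdir=visible tank/data
# each snapshot then appears as a read-only directory tree
ls /mnt/tank/data/.zfs/snapshot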
ZFS has a zfs diff command which can list the files that have changed between any two snapshot versions within a dataset, or between any snapshot and the current data. To create a one-time snapshot, select an existing ZFS pool, dataset, or zvol to snapshot. To include child datasets with the snapshot, set Recursive.
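Returning briefly to the zfs diff command mentioned above, a minimal sketch with hypothetical dataset and snapshot names:

# list files that changed between two snapshots of a dataset
zfs diff tank/data@monday tank/data@tuesday
# compare a snapshot against the current contents of the dataset
zfs diff tank/data@monday tank/data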
The snapshot can have a custom Name or be automatically named by a Naming Schema. Using a Naming Schema allows the snapshot to be included in Replication Tasks. The Naming Schema drop-down is populated with previously created schemas from Periodic Snapshot Tasks. The temporary VMware snapshots are then deleted on the VMware side but still exist in the ZFS snapshot and can be used as stable resurrection points in that snapshot.
These coordinated snapshots are listed in Snapshots. Choosing a datastore also selects any previously mapped dataset. In the disks listing, the pool associated with each disk is displayed in the Pool column. Unused is displayed if the disk is not being used in a pool. The Bulk Edit Disks page displays which disks are being edited and a short list of configurable options.
The Disk Options table indicates the options available when editing multiple disks. To offline, online, or replace the device, see Replacing a Failed Disk. If the serial number for a disk is not displayed in this screen, use the smartctl command from Shell. Ensure all data is backed up and the disk is no longer in use.
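For the smartctl command mentioned above, a minimal example (the device name is a placeholder):

# print drive identity information, including the serial number
smartctl -i /dev/ada0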
Triple-check that the correct disk is being selected to be wiped, as recovering data from a wiped disk is usually impossible. Clicking Wipe offers several choices. Quick erases only the partitioning information on a disk, making it easy to reuse but without clearing other old data. For more security, Full with zeros overwrites the entire disk with zeros, while Full with random data overwrites the entire disk with random binary data.
Quick wipes take only a few seconds. A Full with zeros wipe of a large disk can take several hours, and a Full with random data takes longer. A progress bar is displayed during the wipe to track status. Depending on the hardware capabilities, it might be necessary to reboot to replace the failed drive. Hardware that supports AHCI does not require a reboot.
Striping RAID0 does not provide redundancy. Disk failure in a stripe results in losing the pool. The pool must be recreated and data stored in the failed stripe will have to be restored from backups. Encrypted pools must have a valid passphrase to replace a failed disk.
Set a passphrase and back up the encryption key using the pool Encryption Operations before attempting to replace the failed drive. Select Status and locate the failed disk, then perform these steps: First, take the failed disk offline. This step removes the device from the pool and prevents swap issues. If the hardware supports hot-pluggable disks, click the disk Offline button and pull the disk, then skip to step 3.
If there is no Offline button but only Replace, the disk is already offlined and this step can be skipped. If offlining the disk fails because a scrub is running on the pool, wait for the scrub to complete and try Offline again before proceeding. Encrypted pools require entering the encryption key passphrase when choosing a replacement disk. The current pool encryption key and passphrase remain valid, but any pool recovery key file is invalidated by the replacement process. To maximize pool security, it is recommended to reset pool encryption.
After the drive replacement process is complete, re-add the replaced disk in the S.M.A.R.T. Tests screen. If any problems occur during a disk replacement process, one of the disks can be detached. After the resilver is complete, the pool status shows a Completed resilver status and indicates any errors.
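For reference, the web interface steps above roughly correspond to the following zpool commands. This is a sketch with placeholder pool and device names; on FreeNAS/TrueNAS the pool members are normally referenced by gptid labels and the web interface is the supported path.

# take the failed member offline
zpool offline tank da3
# after physically swapping the drive, start the replacement and resilver
zpool replace tank da3 da4
# watch resilver progress and check for errors
zpool status tank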
A disk that is failing but has not completely failed can be replaced in place, without first removing it. Whether this is a good idea depends on the overall condition of the failing disk. A disk with a few newly-bad blocks that is otherwise functional can be left in place during the replacement to provide data redundancy. A drive that is experiencing continuous errors can actually slow down the replacement.
In extreme cases, a disk with serious problems might spend so much time retrying failures that it could prevent the replacement resilvering from completing before another drive fails. Clicking the device enables the Replace and Remove buttons. Log and cache devices can be safely removed or replaced with these buttons. Both types of devices improve performance, and throughput can be impacted by their removal.
The recommended method for expanding the size of a ZFS pool is to pre-plan the number of disks in a vdev and to stripe additional vdevs from Pools as additional capacity is needed. But adding vdevs is not an option if there are not enough unused disk ports.
If there is at least one unused disk port or drive bay, a single disk at a time can be replaced with a larger disk, waiting for the resilvering process to include the new disk into the pool, removing the old disk, then repeating with another disk until all of the original disks have been replaced. At that point, the pool capacity automatically increases to include the new space.
A pool that is configured as a stripe can only be increased by following the steps in Extending a Pool. The status of the resilver process is shown on the screen, or can be viewed with zpool status. When the new disk has resilvered, the old one is automatically offlined. It can then be removed from the system, and that port or bay used to hold the next new disk.
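Whether the extra capacity appears automatically after the last disk is replaced is governed by the pool autoexpand property; a short sketch with a hypothetical pool and device name:

# allow the pool to grow once every disk in a vdev has been replaced with a larger one
zpool set autoexpand=on tank
# if the property was off during the replacements, expand a replaced disk manually
zpool online -e tank da0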
If an unused disk port or bay is not available, a drive can be replaced with a larger one as shown in Replacing a Failed Disk. This process is slow and places the system in a degraded state. Since a failure at this point could be disastrous, do not attempt this method unless the system has a reliable backup.
Replace one drive at a time and wait for the resilver process to complete on the replaced drive before replacing the next drive. After all the drives are replaced and the final resilver completes, the added space appears in the pool. Only one disk can be imported at a time.
EXT3 journaling is not supported, so those filesystems must have an external fsck utility, like the one provided by the E2fsprogs utilities, run on them before import. EXT4 filesystems with extended attributes or larger inode sizes are not supported. EXT4 filesystems with EXT3 journaling must have an fsck run on them before import, as described above.
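A minimal sketch of running the e2fsprogs fsck before import (the device name is a placeholder and depends on how the disk appears on the system):

# check and repair an EXT3/EXT4 filesystem prior to import
e2fsck -f /dev/da1s1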
Use the drop-down menu to select the disk to import, confirm the detected filesystem is correct, and browse to the ZFS dataset that will hold the copied data. After clicking SAVE , the disk is mounted and its contents are copied to the specified dataset. The disk is unmounted after the copy operation completes. This option is only displayed on systems that contain multipath-capable hardware like a chassis equipped with a dual SAS expander backplane or an external JBOD that is wired for multipath.
Discovered multipath-capable devices are placed in multipath units with the parent devices hidden. Overprovisioning an SSD can be useful in many different scenarios. Perhaps the most useful benefit of overprovisioning is that it can greatly extend the life of an SSD by distributing the total number of writes and erases across more flash blocks on the drive.
When no size is specified, it reverts the provision back to the full size of the device. Some SATA devices may be limited to one resize per power cycle. Some BIOS may block resize during boot and require a live power cycle.
If you do not normally read the data in the pool, Oracle recommends a disk scrub about every month. Scrubbing is a very low-priority background operation.