OneFS 9.11 SmartSync
OneFS SmartSync provides mechanisms to replicate and copy data between Dell PowerScale systems and cloud object storage. By supporting file-to-object replication, SmartSync enables incremental data transfer and preserves metadata during the backup process. This allows for efficient, policy-driven protection and archiving of unstructured data across on-premises and cloud environments, leveraging the scalability of OneFS and cloud storage integration.
OneFS SmartSync is designed for scenarios that involve moving or protecting file data in Dell PowerScale environments. The main use cases include:
- Backing up from PowerScale to cloud object storage (such as ECS and AWS S3), supporting file-to-object replication for large unstructured data sets while retaining metadata and access controls.
- Disaster recovery across sites or clouds by enabling data replication between PowerScale clusters (file-to-file) and from clusters to object storage (file-to-object), supporting multi-copy data distribution and reducing recovery times, especially in cascaded DR topologies.
- Data migration and mobility across locations (data centers, clouds, regions) with both push and pull policies, allowing flexible scheduling and minimizing source resource impact during large-scale data movement.
- Efficient periodic backup between PowerScale clusters using incremental replication (repeat-copy policies), so only changed data is transferred, reducing bandwidth and operational overhead. This is very similar to SyncIQ.
- Preservation of file-level metadata and access controls (NTFS/POSIX ACLs, alternate data streams, extended attributes, and timestamps) during replication, suitable for environments with regulatory or application-specific requirements.
- Full data copy capabilities for systems using CloudPools tiering, ensuring both hot and cold (tiered-to-cloud) data are included in replicas and backups.
- Policy-driven automation for complex or repeated protection and migration workflows, including support for chaining policies and decoupling snapshot creation from data movement.
Key limitations: SmartSync does not support automatic failover/failback, post-replication write access at the target, or compliance mode clusters. Incremental copy (repeat-copy) is only available for file-to-file replication and for file-to-object with the new dataset type “FILE_ON_OBJECT_BACKUP”. File-to-object with the default dataset type “FILE” always performs a full dataset copy.
In this article we are going to explore the “Backup to Object” feature introduced with OneFS 9.11 in April 2025. We’ll cover everything from initial setup through a number of basic operations: backup, dataset browsing, and restore.
Feature integration:
- CloudPools: SmartSync ALWAYS performs a deep copy when creating a dataset from data tiered with CloudPools. A deep copy retrieves all data that is tiered to a cloud provider on the source cluster, so that all of it can be replicated to the target cluster.
- SmartLock / WORM: SmartSync is NOT SmartLock aware, so it does not carry retention times along. Replication into an existing WORM domain on a target cluster works, but the data will not be committed. Using autocommit to commit it is potentially dangerous: data might still be coming in, which can cause the SmartSync job to fail.
- SmartQuotas: SmartSync is not quota domain aware. It can copy data into a quota domain on a target system, but it cannot carry quota information along. It also does not check quota boundaries: if quota domains differ between clusters and the target runs out of space, the DM job will fail (see the pre-check sketch after this list).
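Because SmartSync will not stop at a quota boundary, it can pay off to confirm the target path has headroom before kicking off a large DM job. A minimal pre-check sketch, run on the target cluster (the path is illustrative; adjust to your environment):

```bash
# list any quota covering the intended target path
isi quota quotas list --path=/ifs/data/restore

# and sanity-check overall free space on /ifs
df -h /ifs
```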
More information on SmartSync can be found in Dell's PowerScale OneFS documentation.
Playing with Backup to Object
Prerequisites:
- One, or better two, PowerScale simulators (virtual machines) running OneFS 9.11
- As a cloud provider we're using ECS Test Drive
- Register and explore S3 on ObjectScale, formerly known as ECS, here: https://portal.ecstestdrive.com/
Setting it all up
- Create and install certificates. Create a CA and an identity per DM peer!
```bash
mkdir /ifs/certs
cd /ifs/certs

# CA key and self-signed CA certificate
openssl genrsa -out ca.key 4096
openssl req -x509 -new -nodes -key ca.key -sha256 -days 1825 -out ca.pem

# identity key and certificate signing request
openssl genrsa -out identity.key 4096
openssl req -new -key identity.key -out identity.csr

cat << EOF > identity.ext
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage=digitalSignature,nonRepudiation,keyEncipherment,dataEncipherment
EOF

# sign the identity certificate with the CA
openssl x509 -req -in identity.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out identity.crt -days 825 -sha256 -extfile identity.ext

# register the CA and identity with the datamover
isi dm certificates ca create --certificate-path=/ifs/certs/ca.pem --name=test-CA
isi dm certificates identity create --certificate-key-path=/ifs/certs/identity.key --certificate-path=/ifs/certs/identity.crt --name=Test-ID
isi dm certificates ca list
isi dm certificates identity list
```
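Before registering the certificates, it is worth confirming that the identity certificate actually chains to the CA; a broken chain otherwise only surfaces later as a peer connection failure. A quick check with plain openssl (nothing SmartSync-specific):

```bash
# should print "identity.crt: OK" if identity.crt was signed by ca.pem
openssl verify -CAfile ca.pem identity.crt
```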
- Create a target account for ECS

```bash
isi dm accounts create ECS_S3 \
    --auth-mode=CLOUD \
    --access-id=130971723356320605@ecstestdrive.emc.com \
    --secret-key=RzOvafHiaiTdrvIvZXEHGFaQ5I/X1ard8wgFx9je \
    --name=ECS\ Test\ Drive \
    --uri=https://object.ecstestdrive.com/<bucket_name>
```
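If jobs later fail with authentication errors, it helps to rule out the credentials first. The same access key pair can be tested from any S3 client; for example, a sketch using the AWS CLI from a workstation (assuming the bucket already exists on ECS Test Drive):

```bash
# list the bucket contents via the ECS S3 endpoint
aws s3 ls s3://<bucket_name> --endpoint-url https://object.ecstestdrive.com
```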
- Create a dataset creation policy

```bash
mkdir /ifs/dm_source
cp some_data /ifs/dm_source    # stage some test data
isi dm policies create create4cloud NORMAL true CREATION \
    --creation-account-id=local \
    --creation-base-path=/ifs/dm_source \
    --creation-dataset-retention-period=72000 \
    --creation-dataset-expiry-action=DELETE \
    --start-time="2022-10-12 12:00:00"
```
- Create a copy policy (a linked policy, to be precise)

```bash
isi dm policies create Backup2ECS \
    --priority=NORMAL \
    --enabled=true \
    --policy-type=REPEAT_COPY \
    --parent-exec-policy-id=1 \
    --repeat-copy-source-base-path=[dm_source] \
    --repeat-copy-base-base-account-id=local \
    --repeat-copy-base-source-account-id=local \
    --repeat-copy-base-new-tasks-account=local \
    --repeat-copy-base-target-account-id=[Cloud Account ID] \
    --repeat-copy-base-target-base-path=[target_path] \
    --repeat-copy-base-target-dataset-type=FILE_ON_OBJECT_BACKUP \
    --repeat-copy-base-dataset-retention-period=12000 \
    --repeat-copy-base-dataset-reserve=10 \
    --repeat-copy-base-policy-dataset-expiry-action=DELETE \
    --start-time="2025-05-29 10:20:00"
```
- Create a cleanup policy for aging out local datasets

```bash
isi dm policies create local_cleanup NORMAL true EXPIRATION \
    --expiration-account-id=local \
    --start-time="2022-10-12 12:00:00" \
    --recurrence="0 * * * *"
```
- Run the job

```bash
isi dm policies modify --run-now=true 1
```
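Once kicked off, the job can be watched from the CLI. A quick status sketch (exact output columns vary by release):

```bash
isi dm jobs list               # currently running datamover jobs
isi dm historical-jobs list    # completed jobs with their final status
isi dm datasets list           # datasets produced so far
```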
- Make it recurring by updating the dataset creation policy. The schedule uses a cron-like expression:

```
<seconds> <minutes> <hours> <day-of-month> <month> <day-of-week> <year>
```

Run a job every hour:

```bash
isi dm policies modify --recurrence="0 * * * *" 1
```

For those who cannot memorize cron expressions: https://crontab.cronhub.io/
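A few more recurrence examples in the same style (illustrative; policy ID 1 is assumed, as in the run example above):

```bash
isi dm policies modify --recurrence="0 0 * * *" 1     # daily at midnight
isi dm policies modify --recurrence="0 */6 * * *" 1   # every six hours
isi dm policies modify --recurrence="0 2 * * 0" 1     # Sundays at 02:00
```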
Working with Backups
To access and browse the backup sets in the S3 bucket we use the `isi_dm` tool. (Note: this is not PAPI-fied today, i.e. not exposed through the platform API.)

`isi_dm` is basically the binary front end of SmartSync, a.k.a. the datamover; the `isi dm` CLI interacts with `isi_dm`. So technically we can do anything and everything datamover-related with this tool from a cluster's CLI.

Now, how do we access the backed-up data with `isi_dm`?
Three options:

- `isi_dm object list` lists the HEAD, a.k.a. the latest version, of a backup dataset from an object account.
- `isi_dm browse` starts an interactive CLI that allows searching and accessing the existing “backup sets” and downloading individual files.
- Create a copy policy to restore data.
Backup Listing
```bash
# isi_dm object list --account-id [ECS Account ID] --target-type=backup --basepath portableOneFS
{
  "gfid" : "000C292EAE00437A3568322110D2EB9828801",
  "name" : "portableOneFS",
  "subdir" : [
    {
      "gfid" : "000C292EAE00437A3568322110D2EB9828803",
      "name" : "More_data.txt",
      "size" : "10 bytes",
      "type" : "file"
    },
    {
      "gfid" : "000C292EAE00437A3568322110D2EB9828802",
      "name" : "README.txt",
      "size" : "4 bytes",
      "type" : "file"
    },
    {
      "gfid" : "000C292EAE00437A3568322110D2EB9828804",
      "name" : "data",
      "subdir" : [],
      "type" : "directory"
    }
  ],
  "type" : "directory"
}
```

The above basically allows retrieving the data structure; however, it only shows the latest version of the dataset.
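Since the listing is plain JSON, it can be post-processed with standard tooling. A sketch using `jq` (assuming `jq` is available, e.g. after saving the output to a file on a workstation) to pull out just the top-level file names:

```bash
# extract the names of regular files from a saved listing
jq -r '.subdir[] | select(.type == "file") | .name' listing.json
```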
Browsing and downloading
The interactive `isi_dm browse` CLI facility allows switching between accounts and datasets, accessing the backed-up metadata, and downloading individual files or folders. Note: `help` gives a list of available commands.

- Connect to an object account and select a dataset

```bash
# isi_dm browse
<no account>:<no dataset> $ list-accounts
000000000000000100000000000000000000000000000000 (backup2cloud)
000c292eae00437a3568322110d2eb982880000000000000 (DM Local Account)
fd0000000000000000000000000000000000000000000000 (DM Loopback Account)
<no account>:<no dataset> $ connect-account 000000000000000100000000000000000000000000000000
backup2cloud:<no dataset> $ list-datasets
1 2025-05-29T10:40:13+0200 portableOneFS
2 2025-05-29T11:11:57+0200 portableOneFS
3 2025-05-29T11:26:10+0200 portableOneFS
4 2025-05-29T12:30:23+0200 portableOneFS
5 2025-05-29T14:00:29+0200 portableOneFS
6 2025-05-29T14:04:38+0200 portableOneFS
7 2025-05-29T14:08:55+0200 portableOneFS
backup2cloud:<no dataset> $ connect-dataset 7
backup2cloud:7 <portableOneFS:> $
```

`find` recursively lists ALL available files and folders from the dataset or a given subfolder, but it supports no additional filtering. `cd` and `ls` allow interactive navigation of the directory structure:
```bash
backup2cloud:7 <portableOneFS:> $ find
./data/
./More_data.txt
./README.txt
./file_with_acl
./data/f73wg84xLnbANTr_dir/
./data/oc6yhDj94xgDj942wLnGla5x_dir/
./data/DATASET.INFO
./data/f73wg84xLnbANTr_dir/AiEPp_dir/
./data/f73wg84xLnbANTr_dir/AiEPp_dir/jEkaANTWYZZZuf731vf_dir/
./data/f73wg84xLnbANTr_dir/AiEPp_dir/qd6yhDOUXYuKnGla5xg8zMoc63_1_reg
./data/f73wg84xLnbANTr_dir/AiEPp_dir/jEkaANTWYZZZuf731vf_dir/qd6yhDj9421vKSrImbANTrdBiE_dir/
./data/f73wg84xLnbANTr_dir/AiEPp_dir/jEkaANTWYZZZuf731vf_dir/qd6yhDj9421vKSrImbANTrdBiE_dir/qd6yMTWYuKSWte7yMTWtJmb521_dir/
./data/f73wg84xLnbANTr_dir/AiEPp_dir/jEkaANTWYZZZuf731vf_dir/qd6yhDj9421vKSrImbANTrdBiE_dir/qd6yMTWYuKSWte7yMTWtJmb521_dir/Cj9zh8z_dir/
./data/oc6yhDj94xgDj942wLnGla5x_dir/100v_dir/
./data/oc6yhDj94xgDj942wLnGla5x_dir/100v_dir/sub_dir1/

backup2cloud:7 <portableOneFS:> $ ls
More_data.txt [file]
README.txt [file]
data [dir]
file_with_acl [file]
backup2cloud:7 <portableOneFS:> $ cd data
backup2cloud:7 <portableOneFS:data/> $ ls
DATASET.INFO [file]
f73wg84xLnbANTr_dir [dir]
oc6yhDj94xgDj942wLnGla5x_dir [dir]
```
- Download selected files or folders

`download` retrieves selected data; however, it is not a “restore” facility, as it does not retrieve the original security descriptors.

```bash
backup2cloud:7 <portableOneFS:data/> $ download oc6yhDj94xgDj942wLnGla5x_dir /ifs/data/restore/testfolder
oc6yhDj94xgDj942wLnGla5x_dir/100v_dir/ -> /ifs/data/restore/testfolder/oc6yhDj94xgDj942wLnGla5x_dir/100v_dir/
oc6yhDj94xgDj942wLnGla5x_dir/100v_dir/sub_dir1/ -> /ifs/data/restore/testfolder/oc6yhDj94xgDj942wLnGla5x_dir/100v_dir/sub_dir1/
oc6yhDj94xgDj942wLnGla5x_dir/100v_dir/vf7y_1_reg -> /ifs/data/restore/testfolder/oc6yhDj94xgDj942wLnGla5x_dir/100v_dir/vf7y_1_reg
oc6yhDj94xgDj942wLnGla5x_dir/100v_dir/sub_dir1/sub_dir2/ -> /ifs/data/restore/testfolder/oc6yhDj94xgDj942wLnGla5x_dir/100v_dir/sub_dir1/sub_dir2/
oc6yhDj94xgDj942wLnGla5x_dir/100v_dir/sub_dir1/sub_dir2/sub_dir3/ -> /ifs/data/restore/testfolder/oc6yhDj94xgDj942wLnGla5x_dir/100v_dir/sub_dir1/sub_dir2/sub_dir3/
oc6yhDj94xgDj942wLnGla5x_dir/100v_dir/sub_dir1/sub_dir2/afile -> /ifs/data/restore/testfolder/oc6yhDj94xgDj942wLnGla5x_dir/100v_dir/sub_dir1/sub_dir2/afile
...
backup2cloud:3 <portableOneFS:> $ download README.txt /ifs/data/README.original.txt
backup2cloud:3 <portableOneFS:> $
```
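Because `download` does not bring back the original security descriptors, it is worth checking what permissions the downloaded copies ended up with. On OneFS, `ls -le` lists a file's ACL, so a quick post-download inspection might look like this (path from the example above):

```bash
# inspect ownership and ACL of the downloaded file on the cluster
ls -le /ifs/data/README.original.txt
```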
Restore from Cloud
Data can be restored to the same cluster or to a different cluster pointing to the same cloud account.

Data is pulled in via a COPY policy. Note that a COPY policy can only be run once! The target directory must be empty; if it does not exist, the process creates it.
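Since a non-empty target makes the job fail, a quick pre-flight check before creating the policy can save a wasted run (the path is the one used in this walkthrough):

```bash
# create the restore target if needed and verify it is empty
mkdir -p /ifs/data/restore
[ -z "$(ls -A /ifs/data/restore)" ] && echo "target empty, good to go" || echo "target NOT empty"
```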
Full restore from a given dataset:
```bash
isi dm policies create RestoreFromECS \
    --priority=NORMAL \
    --enabled=true \
    --policy-type=COPY \
    --copy-dataset-id=6 \
    --copy-source-base-path=portableOneFS \
    --copy-create-dataset-on-target=false \
    --copy-base-source-account-id=000000000000000100000000000000000000000000000000 \
    --copy-base-target-account-id=local \
    --copy-base-target-base-path=/ifs/data/restore \
    --copy-base-target-dataset-type=FILE \
    --start-time="2025-05-29 10:20:00"
```
Single File Restore

With the option `--copy-source-subpaths` it's possible to specify a single file or folder in the source dataset to be retrieved. Contrary to the `download` option in `isi_dm browse`, this restores the file including the original security descriptor.

```bash
isi dm policies create SingleFileRestore \
    --priority=NORMAL \
    --enabled=true \
    --policy-type=COPY \
    --copy-dataset-id=7 \
    --copy-source-base-path=portableOneFS \
    --copy-source-subpaths=file_with_acl \
    --copy-create-dataset-on-target=false \
    --copy-base-source-account-id=000000000000000100000000000000000000000000000000 \
    --copy-base-target-account-id=local \
    --copy-base-target-base-path=/ifs/data/restore \
    --copy-base-target-dataset-type=FILE \
    --start-time="2025-05-29 10:20:00"
```
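After the job has run, the restored file should carry its original ACL, unlike a plain `download`. A quick way to verify, again using the OneFS `ls -le` ACL listing:

```bash
# the restored copy should show the same ACEs as the original file
ls -le /ifs/data/restore/file_with_acl
```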
PowerScale to PowerScale DM Communications

Creating a DM account for the second cluster allows the following:
- Create PULL policies, where all the primary work is done on the local cluster
- Start PowerScale to PowerScale replications
- Pull data in on demand using a one-off COPY policy
- Pull from and push to the cloud
- Browse and download from remote datasets
Use cases:
- Cyber: golden copy to a vault. No administration on the source system; the only visible artifacts are the datasets created. The local cluster does not manage the replication and cannot compromise the replicated data.
- Data distribution / collaboration:
- Multiple clusters connect to each other
- Datasets are created locally
- Each cluster can pull or download data on demand
- Critical data can be distributed to all systems
Access from or with a 2nd Cluster
- Install a certificate signed by the same CA as above
- Create the same cloud account
- Start browsing
```bash
sec-1# isi_dm browse
<no account>:<no dataset> $ list-accounts
000000000000000108000000000000000000000000000000 (ECS Test Drive)
000c29f6a733f46f3868ad03c04a1aee2790000000000000 (DM Local Account)
fd0000000000000000000000000000000000000000000000 (DM Loopback Account)
<no account>:<no dataset> $ connect-account 000000000000000108000000000000000000000000000000
ECS Test Drive:<no dataset> $ list-datasets
1 2025-05-29T10:40:13+0200 portableOneFS
2 2025-05-29T11:11:57+0200 portableOneFS
3 2025-05-29T11:26:10+0200 portableOneFS
4 2025-05-29T12:30:23+0200 portableOneFS
5 2025-05-29T14:00:29+0200 portableOneFS
6 2025-05-29T14:04:38+0200 portableOneFS
7 2025-05-29T14:08:55+0200 portableOneFS
8 2025-05-29T16:00:23+0200 portableOneFS
ECS Test Drive:<no dataset> $ connect-dataset 8
ECS Test Drive:8 <portableOneFS:> $ ls
More_data.txt [file]
README.txt [file]
data [dir]
file_with_acl [file]
ECS Test Drive:8 <portableOneFS:> $ download README.txt /ifs/data/README.restored
ECS Test Drive:8 <portableOneFS:> $ find
./data/
./More_data.txt
./README.txt
./file_with_acl
./data/f73wg84xLnbANTr_dir/
./data/oc6yhDj94xgDj942wLnGla5x_dir/
./data/DATASET.INFO
./data/f73wg84xLnbANTr_dir/AiEPp_dir/
./data/f73wg84xLnbANTr_dir/AiEPp_dir/jEkaANTWYZZZuf731vf_dir/
./data/f73wg84xLnbANTr_dir/AiEPp_dir/qd6yhDOUXYuKnGla5xg8zMoc63_1_reg
./data/f73wg84xLnbANTr_dir/AiEPp_dir/jEkaANTWYZZZuf731vf_dir/qd6yhDj9421vKSrImbANTrdBiE_dir/
...
```

- Or restore as if it were the original cluster…
Change `--copy-base-source-account-id` to match the potentially different account ID of the target cluster's DM configuration.
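As a sketch of what that looks like on the second cluster: everything except the source account ID (taken from the list-accounts output above) matches the earlier RestoreFromECS policy; the policy name and dataset ID are illustrative:

```bash
isi dm policies create RestoreOnSec \
    --priority=NORMAL \
    --enabled=true \
    --policy-type=COPY \
    --copy-dataset-id=8 \
    --copy-source-base-path=portableOneFS \
    --copy-create-dataset-on-target=false \
    --copy-base-source-account-id=000000000000000108000000000000000000000000000000 \
    --copy-base-target-account-id=local \
    --copy-base-target-base-path=/ifs/data/restore \
    --copy-base-target-dataset-type=FILE \
    --start-time="2025-05-29 10:20:00"
```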
PULL to Object from someone else's Cluster…
- Create a dataset creation policy for the remote cluster

```bash
isi dm policies create on_sec NORMAL true CREATION \
    --creation-account-id=000000000000000104000000000000000000000000000000 \
    --creation-base-path=/ifs/data/Documents \
    --creation-dataset-retention-period=72000 \
    --creation-dataset-expiry-action=DELETE \
    --start-time="2022-10-12 12:00:00"
```
- Create a PULL copy policy

```bash
isi dm policies create Backup_SEC_2_ECS \
    --priority=NORMAL \
    --enabled=true \
    --policy-type=REPEAT_COPY \
    --parent-exec-policy-id=8193 \
    --repeat-copy-source-base-path=/ifs/data/Documents \
    --repeat-copy-base-base-account-id=local \
    --repeat-copy-base-source-account-id=000000000000000104000000000000000000000000000000 \
    --repeat-copy-base-new-tasks-account=local \
    --repeat-copy-base-target-account-id=000000000000000100000000000000000000000000000000 \
    --repeat-copy-base-target-base-path=fromSEC \
    --repeat-copy-base-target-dataset-type=FILE_ON_OBJECT_BACKUP \
    --repeat-copy-base-dataset-retention-period=12000 \
    --repeat-copy-base-dataset-reserve=10 \
    --repeat-copy-base-policy-dataset-expiry-action=DELETE \
    --start-time="2025-05-29 10:20:00"
```
- Run the dataset creation policy manually with `--run-now=true`, or schedule it with `--recurrence`.
APPENDIX
Review the metadata of a given file:
```bash
backup2cloud:3 <portableOneFS:> $ stat README.txt
gstat:{
gfile_id = {guid={dmg_guid=000c292eae00437a3568322110d2eb982880} local_unid=2}
mode = 000000100600
nlink = 1
uid = 0
gid = 0
rdev = -1
atime = 2025-05-29T10:51:19+0200
mtime = 2025-05-29T10:51:19+0200
ctime = 2025-05-29T10:51:19+0200
birthtime = 2025-05-29T10:51:19+0200
size = 1037
flags = file
}
sd: {
sd_info = DACL_SECURITY_INFORMATION
sd_data = { control=32772 owner=<NULL> group=<NULL> dacl={ ace_count=3 aces=[ { ace_type=0x00 ace_flags=0x00 ace_rights=0x00120089 ace_trustee=type:DM_PERSONA_UID size:8 uid:13 }, { ace_type=0x00 ace_flags=0x00 ace_rights=0x0016019f ace_trustee=type:DM_PERSONA_UID size:8 uid:0 }, { ace_type=0x00 ace_flags=0x00 ace_rights=0x00120080 ace_trustee=type:DM_PERSONA_GID size:8 gid:0 } ] } sacl=<NULL> }
}
extattr: {
extattr_num = 0
extattr_names =
extattr_values = []
}
sysattr: {
sysattr_num = 3
sysattr_values = [ { dmsa_name=ST_FLAG dmsa_value=00000000 dmsa_value_size=4 dmsa_fs_type=DM_FS_TYPE_ONEFS dmsa_fs_version=0xffffffffffffffff }, { dmsa_name=HDFSRAW dmsa_value=0000000000000000080000000000000080000000000000004000100000000000 dmsa_value_size=32 dmsa_fs_type=DM_FS_TYPE_ONEFS dmsa_fs_version=0xffffffffffffffff }, { dmsa_name=MD5_VAL dmsa_value=00 dmsa_value_size=1 dmsa_fs_type=DM_FS_TYPE_ONEFS dmsa_fs_version=0xffffffffffffffff } ]
}
Parent gfid count #1:
Parent info count #1:
Parent gfid #0: {guid={dmg_guid=000c292eae00437a3568322110d2eb982880} local_unid=1}
{digest=b01eeed0ec6fc4f4799e44c7b4084e64}
```
