Installing firmware on fresh disks (2Big 2)

Connecting the disk

You'll need two disks. This method doesn't work with only one disk. You can prepare the disks in succession, so you don't need to be able to connect both disks at the same time.

Connect the new disks to a Linux PC. (A Windows PC booted from a Linux live CD or USB stick is fine.) You can use a USB-SATA converter, or connect the disk to an internal or external SATA port.
You'll need mdadm and xfsprogs (for mkfs.xfs). mdadm is not installed by default on an Ubuntu system, so you'll have to install it:

sudo apt-get update
sudo apt-get install mdadm xfsprogs

Note that Puppy Linux doesn't support XFS.
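
If you're not sure what your live system already has on board, a quick check like this (just a sketch) lists any of the tools used below that are still missing:

for tool in mdadm mkswap mke2fs mkfs.xfs fdisk; do
    command -v $tool >/dev/null || echo "$tool is missing"
done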

Find device name

Find the device name of the disk:

cat /proc/partitions

I'll assume the disk is /dev/sdb for the rest of this guide.
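
On a typical setup with one internal drive (sda) and the newly connected disk, cat /proc/partitions prints something roughly like this (the block counts are made up for a ~1 TB disk; only the names matter), so double-check the size before picking a device:

major minor  #blocks  name

   8        0  156290904 sda
   8        1  155286016 sda1
   8        2    1003520 sda2
   8       16  976762584 sdb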

Collect files

Download the files (mbr+label.gz, sda6.gz, sda7.tgz and sda8.tgz) here

Become root

You'll need root rights for the next steps. On Ubuntu or Knoppix you can get them by executing

sudo su

On most other distributions you just execute

su


MBR and label

Write mbr+label.gz to disk:

gzip -dc /full/path/to/mbr+label.gz | dd of=/dev/sdb

(When using sudo instead of a root shell, this becomes

gzip -dc /full/path/to/mbr+label.gz | sudo dd of=/dev/sdb

)
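
To check that the image landed on the disk, you can ask the kernel to re-read the partition table and list it (a sketch; at this point only the extended partition sdb1 and the logical partition sdb5 from the image should show up):

blockdev --rereadpt /dev/sdb
fdisk -l /dev/sdb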

Create partitions

Use fdisk to generate this partition table:

   Device Boot      Start         End      Blocks  Id System
/dev/sdb1               1         250     2008093+  5 Extended
/dev/sdb2             251      121601   974751907+ fd Linux raid autodetect
/dev/sdb5               1          32      256977  fd Linux raid autodetect
/dev/sdb6              33          33        8001  83 Linux
/dev/sdb7              34          34        8001  fd Linux raid autodetect
/dev/sdb8              35         140      851413+ fd Linux raid autodetect
/dev/sdb9             141         249      875511  fd Linux raid autodetect
/dev/sdb10            250         250        8001  83 Linux

/dev/sdb2 is the data partition; it uses all remaining space. When using different disks, make sure the data partitions have the same size.
After writing mbr+label.gz to the disk, /dev/sdb1 and /dev/sdb5 are already created.
/dev/sdb2 is a primary partition, /dev/sdb1 is an extended partition, and all other partitions are logical.

fdisk is started by:

fdisk /dev/sdb

Use 'm' inside fdisk to get further help.
For more modern fdisk versions (2012 and later?) you'll need to add the flags -cu:

fdisk -cu /dev/sdb

These flags switch on 'DOS compatibility' and 'cylinder units'. On older fdisk versions the same flags do the reverse; in other words, the default mode changed somewhere around 2011/2012.
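
As an illustration, the interactive session for the table above goes roughly like this (a sketch of the prompts, not a literal transcript; sdb1 and sdb5 already exist from the image, newly created partitions default to type 83 Linux, and type fd marks the RAID members):

n  p  2  251  121601    # primary data partition /dev/sdb2
t  2  fd                # set its type to Linux raid autodetect
n  l  33  33            # logical /dev/sdb6 (kernel), leave type 83
n  l  34  34            # logical /dev/sdb7, then: t  7  fd
n  l  35  140           # logical /dev/sdb8, then: t  8  fd
n  l  141  249          # logical /dev/sdb9, then: t  9  fd
n  l  250  250          # logical /dev/sdb10, leave type 83
w                       # write the table and quit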

Preparing RAID arrays

/dev/sdb5 (swap)

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb5 missing --metadata=0.90
mkswap -f /dev/md0
mdadm --stop /dev/md0

/dev/sdb7 (initfs)

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb7 missing --metadata=0.90
mke2fs -j /dev/md0
mkdir /tmp/md0
mount /dev/md0 /tmp/md0
cd /tmp/md0
tar xzf /full/path/to/sda7.tgz
cd ..
umount /tmp/md0
mdadm --stop /dev/md0

/dev/sdb8 (ro layer rootfs)

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb8 missing --metadata=0.90
mke2fs -j /dev/md0
mount /dev/md0 /tmp/md0
cd /tmp/md0
tar xzf /full/path/to/sda8.tgz
cd ..
umount /tmp/md0
mdadm --stop /dev/md0

/dev/sdb9 (rw layer rootfs)

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb9 missing --metadata=0.90
mke2fs -j /dev/md0
mdadm --stop /dev/md0

/dev/sdb2 (data partition)

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb2 missing --metadata=0.90
mkfs.xfs /dev/md0
mdadm --stop /dev/md0

/dev/sdb6 (kernel)

gzip -cd /full/path/to/sda6.gz | dd of=/dev/sdb6

/dev/sdb10 (spare kernel?)

dd if=/dev/zero of=/dev/sdb10
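
The dd onto /dev/sdb10 just fills that partition with zeroes; it ends with a 'No space left on device' message, which is expected. Before moving on, you may want to check that every RAID member got its superblock (a sketch; --examine also prints the array UUIDs that the box will later store in its EEPROM, see the Background section below):

for part in /dev/sdb2 /dev/sdb5 /dev/sdb7 /dev/sdb8 /dev/sdb9; do
    mdadm --examine $part | grep -E 'UUID|State'
done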

Installing the disks

When both disks are prepared, you can put them in the 2Big 2. Switch it on, and the box will boot. After about a minute you can log in to the web interface and choose your RAID setting.

Background

The box contains an EEPROM which stores the UUIDs of the RAID arrays. When those UUIDs don't match, the box won't boot. I *think* those UUIDs are used to find out which is the 'real' disk when you exchange one. To allow plugging in new disks, a disk is marked as 'new' by the presence of the string LaCieFirstBootLaCie starting at byte 1536 of the disk(s). When this label is present, new RAID arrays are created, their UUIDs are written to the EEPROM, and the mark itself is overwritten with net2big_v2.
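
If you're curious, you can peek at this marker (a sketch; the label presumably comes from mbr+label.gz, so a freshly prepared disk should show LaCieFirstBootLaCie, while a disk the box has already initialised shows net2big_v2):

dd if=/dev/sdb bs=1 skip=1536 count=32 2>/dev/null | hexdump -C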

If you ever need to put 2Big 2 disks in a different box, you can rewrite this label to tell the box the disks are new, but that will destroy your data. A better way is to put the disks in and connect to the serial port. The box will refuse to boot and drop a shell; there you can use mdadm to find out the UUIDs and md_eeprog to write them to the EEPROM.

Copied from www.steppen-wolf.eu:
I’ll leave this for reference, in case anyone else needs to rescue their data:

The OS is multiple RAID 1 partitions (presumably so you can hot-swap the drives), and the user data is a Linear, RAID 0, or RAID 1 array from /dev/sda2 and /dev/sdb2, with the UUID stored in EEPROM on the board. The user data is assembled at /dev/md4 and mounted at /home.

The partitions: 2=data, 5=swap, 7=boot, 8=root, 9=snap (config)
The raid arrays: /md0=boot, /md1=root, /md2=snap, /md3=swap, /md4=data

to get the UUID of the RAID you want to “plant” back:

mdadm --examine /dev/sda2

to get the UUID stored in EEPROM

/sbin/md_eeprog -g /dev/md4

to set the UUID to new one

/sbin/md_eeprog -s {UUID} /dev/md4

For other system-surgery research, look in: sbin/lib/lacie/libraid

I don’t know what LaCie’s engineers were thinking… they’ve missed a really important design feature – being able to pull the drives and use them in another box without fuss.