Ban's links
11 results tagged Linux
  • Recover deleted/replaced files on EXT file systems

    I just had to try and recover a couple of files that a buggy program had replaced with empty files instead of writing their updated content. The context is an ext4 FS, on a secondary data partition (and even a separate disk, but that's unrelated).

    The linked post is interesting, but a bit overkill: in my case there was no need to actually back up the journal (though I did), nor to unmount the partition to use either tool.

    Additionally, the linked article talks about deleted files, whereas here I wanted to recover previous versions of the content of existing files. I guess this requires that the program has not rewritten the same blocks; in my case the program first writes to a temp file and then renames it over the target (although it happily replaced it with an empty file), and it wrote a 0-byte file, which likely wouldn't overwrite anything. Anyway, for this use case, the key is the -b option, which sets a time frame that excludes the faulty rename.

    So, basically what I did:

    • Remount the partition read-only to avoid any additional writes that could corrupt the data blocks: sudo mount /dev/sdb1 -o remount,ro
    • Back up the EXT4 journal just in case (but I highly doubt it was of any use; I could have used the actual FS's journal): sudo debugfs -R "dump <8> $HOME/sdb1.journal" /dev/sdb1
    • Trial version listing potential recoveries: sudo ext4magic /dev/sdb1 -a $(date -d "-2hours" +%s) -b $(date -d "-45minutes" +%s) -f relative/path/to/files/ -j ~/sdb1.journal -l
    • Actual recovery: sudo ext4magic /dev/sdb1 -a $(date -d "-2hours" +%s) -b $(date -d "-45minutes" +%s) -f relative/path/to/files/ -j ~/sdb1.journal -r -d RECOVERY/

    At this point I had the files in their state from 45 minutes ago, validated the recovery and remounted read-write. Done.

    This was surprisingly easy, thanks to the journaling FS :)
    To be fair, having the lost data outside the root and home FSes helped a lot: not only do random applications potentially write stuff wherever mutable data is stored (/home, /var/run, /tmp and whatnot), but I could also install the missing tools straight away without risking overwriting precious data blocks.
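    By the way, the -a and -b options take Unix timestamps, which GNU date computes easily. A small sketch using the same offsets as in the commands above:

```shell
# ext4magic time window: only consider journal data written after -a
# (2 hours ago) and before -b (45 minutes ago), excluding the faulty rename.
after=$(date -d "-2hours" +%s)
before=$(date -d "-45minutes" +%s)

# Sanity check: the window must be non-empty (75 minutes wide here).
[ "$after" -lt "$before" ] && echo "window: $after -> $before"
```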

    March 14, 2025 at 17:04:28 GMT+1 * - permalink -
    - https://gist.github.com/ebautistabar/cca12863e6335d08a019f015f53fac4a
    Linux data-recovery ext4
  • Resolve mDNS/Bonjour local network names from CLI

    ping can resolve .local domains, but e.g. host can't, which is annoying when trying to resolve the IP address of an mDNS host from e.g. a script.
    Fortunately, avahi-resolve can do that, and it is easy enough to use in scripts.

    The reason I needed this was to consolidate a setup where a network scanner advertises an mDNS name, but that name can't be used in the SANE device path; only the IP works. So, replace the IP field in the call with "$(avahi-resolve --name -4 NAME.local | awk '{print $2}')" and voilà.
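    A sketch of that setup for reference (scanner.local and the escl backend string are placeholders; adapt them to your device):

```shell
# avahi-resolve prints "name<TAB>address", so awk grabs the second field.
ip=$(avahi-resolve --name -4 scanner.local | awk '{print $2}')

# Use the resolved IPv4 address in the SANE device string.
scanimage -d "escl:http://$ip:80" --format=png > scan.png
```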

    November 13, 2024 at 15:52:24 GMT+1 * - permalink -
    - https://manpages.debian.org/testing/avahi-utils/avahi-resolve.1.en.html
    mDNS DNS Linux shell Debian Avahi sane cups
  • Life time estimation - Working with eMMC - Working_with_eMMC.pdf

    It is possible to get an estimation on the health status of the device by checking the parameters
    EXT_CSD_DEVICE_LIFE_TIME_EST_TYP_A and EXT_CSD_DEVICE_LIFE_TIME_EST_TYP_B.
    The estimation is given in steps of 10%, so a value of 0x01 means that 0% to 10% of the lifetime has been used.
    This functionality was introduced in eMMC 5.0.

    # mmc extcsd read /dev/mmcblk2 | grep EXT_CSD_DEVICE_LIFE_TIME_EST
    eMMC Life Time Estimation A [EXT_CSD_DEVICE_LIFE_TIME_EST_TYP_A]: 0x01
    eMMC Life Time Estimation B [EXT_CSD_DEVICE_LIFE_TIME_EST_TYP_B]: 0x01

    It's a bit tricky to find out how to get an estimated SSD lifetime for an eMMC board, but here is how to use and read the mmc tool (from mmc-utils) for that.
    This is similar to SMART's SSD_Life_Time, although the value is a bit counterintuitive.
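    Since each step is 10%, decoding the value is simple arithmetic; a small sketch (the 0x01 comes from the output above):

```shell
# Decode an EXT_CSD_DEVICE_LIFE_TIME_EST_TYP_* value into a wear band:
# 0x01 means 0%-10% of the estimated lifetime used, 0x02 means 10%-20%, etc.
decode_lifetime() {
    v=$(( $1 ))    # shell arithmetic understands the 0x prefix
    echo "$(( (v - 1) * 10 ))%-$(( v * 10 ))% used"
}

decode_lifetime 0x01    # -> 0%-10% used
</imports>```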

    May 21, 2024 at 16:00:13 GMT+2 * - permalink -
    - https://www.embeddedartists.com/wp-content/uploads/2020/04/Working_with_eMMC.pdf#%5B%7B%22num%22%3A46%2C%22gen%22%3A0%7D%2C%7B%22name%22%3A%22XYZ%22%7D%2C111%2C626%2C0%5D
    linux eMMC SSD lifetime drive
  • Synchronizing package list between computers on Debian

    There's a lot of info on the web, but I can never find a complete, canonical way of doing this when I need it, so here's mine:

    On the source machine: dpkg --get-selections > my-selections (everybody agrees on this part, though)

    On the target machine:

    # apt update
    # apt-cache dumpavail | dpkg --merge-avail
    # dpkg --clear-selections  # that's dangerous if you don't set reasonable selections afterwards!
    # dpkg --set-selections < my-selections
    # apt-get -u dselect-upgrade

    And voila.

    February 20, 2024 at 17:45:15 GMT+1 * - permalink -
    - https://ban.netlib.re/shaarli/shaare/3_cb-Q
    Debian linux dpkg apt
  • Importing another process' environment (bash)

    To import the environment of another process, one can use while read -d '' -r ev; do export "$ev"; done <"/proc/$(pgrep -u "$USER" -x PROCNAME)/environ" (when using bash).

    This is particularly handy e.g. when connecting to a machine through SSH while a graphical session is running and wanting to interact with it (X, DBus, etc.).

    This leverages some Bash specifics, like read -d '' to use NUL as the line separator. There are solutions using only POSIX constructs, but the only one I know of involves a temporary file, which is not as handy.

    Before discovering read -d '', I was using another Bashism, process substitution, in the form of <(tr '\0' '\n' </proc/$(pgrep -u "$USER" -x PROCNAME)). It isn't as good, as it would not properly handle newlines in the environment, but it could easily be converted to a POSIX-compliant construct using a temporary file.

    Note that the naive alternative of piping the same thing to the while loop (and thus to read) will not work, as it would run the loop in a subshell, not affecting the environment of the current shell. Another alternative would be to evaluate the output of such a subshell echoing the environment, but that would require escaping the values, for which I don't know a robust POSIX solution (there are plenty of handmade ones around, but most fail in odd corner cases -- and no, printf %q is not in POSIX).
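    For reference, a sketch of that POSIX temporary-file variant (same caveat: newlines inside values are not handled):

```shell
# Import another process' environment using only POSIX constructs:
# turn NUL separators into newlines, store them in a temporary file,
# and read that file in the *current* shell (a pipe would fork a subshell).
tmp=$(mktemp)
tr '\0' '\n' <"/proc/$(pgrep -u "$USER" -x PROCNAME)/environ" >"$tmp"
while read -r ev; do
    export "$ev"
done <"$tmp"
rm -f "$tmp"
```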

    May 11, 2022 at 21:50:00 GMT+2 * - permalink -
    - https://ban.netlib.re/shaarli/shaare/xJy3dg
    bash shell linux
  • SSH ProxyCommand example: Going through one host to reach another server

    Very handy when, to reach one host, you have to bounce through another.

    The naive solution is of course ssh host1 -t 'ssh host2', but it has several problems:

    • it cannot be automated cleanly (as far as I know): no ssh unalias
    • it requires -t for an interactive shell
    • since the second SSH is simply run on the second machine, the local SSH agent is not available (you could use -A, but that brings its own set of problems)

    The ProxyCommand solution is much better, since it gives a proxy to the local SSH itself, so no explicit -t is needed, and no need to forward the agent (it normally goes through the tunnel thus created).
    <edit> Moreover, since this is configuration at the SSH level, everything that uses it benefits: scp, rsync, etc. </edit>
    <edit2> Added the version using ssh -W instead of nc </edit2>

    I personally ended up with this (with the hosts renamed):

    # jump to server2 through server1
    Host server2
        ProxyCommand ssh -W %h:%p %r@server1
        # or alternatively with `nc` if ssh is too old to know -W
        #ProxyCommand ssh %r@server1 nc %h %p

    Of course this combines easily with a local alias:

    # jump to server2 through server1
    Host localalias
        Hostname server2
        ProxyCommand ssh -W %h:%p %r@server1

    Note that the lookup of server2 happens on the proxy, so a name local to the proxy will work -- but not the other way around: an alias local to the client will not resolve.

    Also, since the proxy command is an SSH, you can pass it any SSH option (in my case, -4, because the IPv6 link between me and the host has a high latency that the IPv4 link doesn't).
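    For what it's worth, recent OpenSSH (7.3 and later) has a ProxyJump option (and the -J command-line flag) that is shorthand for the ssh -W ProxyCommand above:

```
# jump to server2 through server1
Host server2
    ProxyJump server1
```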

    July 26, 2016 at 22:51:34 GMT+2 * - permalink -
    - http://www.cyberciti.biz/faq/linux-unix-ssh-proxycommand-passing-through-one-host-gateway-server/
    ssh linux
  • Linux tool to show progress for cp, rm, dd, ...

    Nice little command to show the progress of an operation on files (cp, sha*sum, dd, etc.), which actually seems to work and is packaged in Debian (unstable).

    September 16, 2015 at 12:56:16 GMT+2 * - permalink -
    - https://github.com/Xfennec/progress
    linux
  • LVM: Allocate a LV at the end of the PV - 4.4. Logical Volume Administration

    I wanted to create a new LV on one of my VGs, but I wanted it to be allocated at the end of the PV because I knew that I wouldn't want to extend it (or if I ever did, I wouldn't care about its content -- it was for a /tmp partition), and I wanted the LVs on that same PV to be able to extend linearly if possible. Basically, I wanted to create a new LV that wouldn't get in the way.

    The only solution I found was kind of the nuclear one, but it works great: specify which Physical Extents (PEs) the LV should be allocated on. This can easily be specified to lvcreate as a :PEStart-PEEnd suffix to the PhysicalVolumePath argument.

    But to provide the correct PE range, we first need to find out what "correct" is here. For that, pvdisplay --maps comes in handy, showing how PEs are allocated on a PV. Once you have the free PE range(s) and the PE size, you can easily calculate how many PEs you'll need, and thus which range to specify. Beware though: apparently, if you want to allocate the very last PE, you should not specify it explicitly but simply give an "open" range like ":PEStart-"; otherwise lvcreate told me the range was incorrect (and if you guess it is a 0-based PE value and lower the start and end by 1, you end up with 1 free PE at the end).

    As you provide PE ranges, I suggest providing the size of the LV in Logical Extents (LEs, --extents) instead of megabytes or gigabytes (--size) -- you have already calculated how many PEs you need for the size you want anyway.
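    A sketch of that calculation, assuming a 4 MiB PE size (pvdisplay shows yours) and a 3 GiB LV:

```shell
pe_size_mib=4                # PE size from pvdisplay (assumed 4 MiB here)
lv_size_mib=$(( 3 * 1024 ))  # desired LV size: 3 GiB

# Number of extents to pass to lvcreate with -l.
extents=$(( lv_size_mib / pe_size_mib ))
echo "$extents"              # -> 768
```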

    I also chose to explicitly specify the allocation policy as contiguous, just to be sure LVM wouldn't try and be a smartass behind my back in case I messed up with my ranges.

    In the end, I ended up with this command:

    lvcreate --alloc contiguous -l768 -n NewLVName VGName /dev/mapper/PVName:37149-

    And pvdisplay --maps nicely shows that I have free PEs only before that LV -- so the previous one could be extended even with the contiguous allocation policy.

    PS: well, actually I didn't use this command exactly, because I didn't know about that last PE range issue, so I first allocated it one PE too short of the range I wanted. But then, I decided it was alright and just extended the LV by one PE, using lvextend --alloc contiguous -l+1 /dev/mapper/VGName-LVName /dev/mapper/PVName:37148-.

    March 2, 2015 at 16:39:35 GMT+1 * - permalink -
    - https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/4/html/Cluster_Logical_Volume_Manager/LV.html#create_linear_volumes
    lvm linux
  • Specifying the device for early networking - klibc/klibc.git - klibc main development tree

    I had problems selecting the device to use for early networking, because apparently its name was not very stable (e.g. the upgrade from Linux 2.6.26 to 2.6.32 changed it at the time, but there was more), leading my early networking to fail unexpectedly -- which is especially problematic if you need it to boot.

    Trying once again to fix this, I noticed that the documentation of the ip kernel parameter specifies that the device may be left empty, in which case the kernel will try to find the first one that actually works:

    <device> Name of network device to use. If this is empty, all
    devices are used for RARP/BOOTP/DHCP requests, and the
    first one we receive a reply on is configured. If you
    have only one device, you can safely leave this blank.

    Although this snippet is not very clear on whether it would work with a static IP, it does work just fine :)
    So, if your setup has no risk of picking an inappropriate interface (e.g. if only one actually works in early boot), it's very handy to simply let the kernel choose for you.
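    For example, a static-IP line with the device field (the sixth one) left empty would look like this (the addresses and hostname are placeholders; the field order is client-IP:server-IP:gateway:netmask:hostname:device:autoconf):

```
ip=192.168.1.10::192.168.1.1:255.255.255.0:myhost::off
```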

    February 18, 2015 at 16:12:46 GMT+1 * - permalink -
    - https://git.kernel.org/cgit/libs/klibc/klibc.git/tree/usr/kinit/ipconfig/README.ipconfig
    linux kernel networking net
  • Accessing an encrypted Ubuntu home directory (with ecryptfs)

    The web is not super clear on this subject, so here's a little tutorial/recap:

    1) Of course, start by mounting the root filesystem that contains the ecryptfs (the Ubuntu system partition):
    sudo mount /dev/sda1 /mnt
    (/dev/sda1 is of course to be adapted if needed)

    2) From there, there are two solutions: the "magic" solution with the ecryptfs-recover-private script, and the manual solution where you understand what is going on. If it is simply about recovering data, I recommend the script, which is much simpler and requires less knowledge about the ecryptfs being opened.

    2.1) With the ecryptfs-recover-private script there is almost nothing to do, just run it as root:
    sudo ecryptfs-recover-private
    You can avoid letting it search for the ecryptfs to mount by passing it as an argument:
    sudo ecryptfs-recover-private /mnt/home/.ecryptfs/<username>/.Private/
    (of course, replace <username> with the user's login)
    The tool asks a few simple questions (like the password of the user owning the encrypted data), and normally mounts a version (read-only, unless the --rw option was passed) of the decrypted data somewhere in /tmp/ecryptfs.* (it tells you where).

    2.2) The manual solution is a bit more complex, and roughly reproduces what the script does. The only real advantage is understanding what is going on.

    2.2.1) You need to know the passphrase used for the ecryptfs itself, which is not the password of the user owning the data. If you don't have it, it can however be recovered with the user's password:
    sudo ecryptfs-unwrap-passphrase /mnt/home/.ecryptfs/<username>/.ecryptfs/wrapped-passphrase

    2.2.2) Now the FNEK keys (FileName Encryption Keys) must be inserted into the keyring:
    sudo ecryptfs-add-passphrase --fnek
    This command asks for the ecryptfs passphrase as recovered above, not the user password.
    It prints two tokens: the first is the mount signature (hereafter $mount_sig), the second the FNEK signature (hereafter $fnek_sig). We will need them right after.
    Note: ecryptfs-add-passphrase must be re-invoked before each mount, as the tokens are removed from the keyring on unmount.

    2.2.3) All that remains is to mount the ecryptfs. First create a mount point, of course:
    mkdir ~/priv
    (or anywhere else)
    then mount the ecryptfs:
    sudo mount -t ecryptfs /mnt/home/.ecryptfs/<username>/.Private/ ~/priv -o ecryptfs_cipher=aes,ecryptfs_key_bytes=16,ecryptfs_sig=$mount_sig,ecryptfs_fnek_sig=$fnek_sig,ecryptfs_passthrough=n,no_sig_cache
    (some options here are not strictly necessary and, if omitted, will be asked interactively; giving them on the command line avoids that)
    The ecryptfs passphrase will be asked for, and once entered the mount should succeed, with the files available in ~/priv (or whatever mount point was chosen) \o/

    3) Note: the mount point belongs to the UID of the user owning the encrypted files, which is not necessarily that of the user of a live Ubuntu (999), so you may have to change the permissions of the mount point afterwards, or access the files as root.

    January 25, 2015 at 17:26:02 GMT+1 * - permalink -
    - http://doc.ubuntu-fr.org/ecryptfs#recuperation_du_contenu_d_un_repertoirehome_chiffre
    ecryptfs linux ubuntu chiffrement encryption
  • How to blacklist a Linux kernel module using a boot parameter, AKA how to blacklist a module from Grub

    "Sometimes it may be necessary to blacklist a module to prevent it from being loaded automatically by the kernel and udev. One reason could be that a particular module causes problems with your hardware. The kernel also sometimes lists two different drivers for the same device. This can cause the device to not work correctly if the drivers conflict or if the wrong driver is loaded first.

    You can blacklist a module using the following syntax: module_name.blacklist=yes."
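    Concretely, on a Grub system this means adding the parameter to the kernel command line, e.g. in /etc/default/grub (nouveau is just an example module name), then running update-grub:

```
GRUB_CMDLINE_LINUX_DEFAULT="quiet nouveau.blacklist=yes"
```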

    August 25, 2014 at 14:38:27 GMT+2 * - permalink -
    - https://www.debian.org/releases/stable/i386/ch05s03.html.en#module-blacklist
    linux kernel trick module blacklist