Saturday, November 26, 2011
Easily splitting and storing traceroute data
Traceroute is very useful, but the data it spits out can be a bit tough to chew on. I came up with this one-liner to make it more CSV compatible so you can Split() on commas and have the correct data where you expect it.
traceroute google.com | sed 's/  /,/g' | sed 's/ ms / ms,/g'
This takes load balancers into account as well, so when your route changes slightly during a hop, you can still easily grok the data coming back. Basically, it takes double spaces and replaces them with a comma. The second sed is what takes the load balancers into account, fixing the output so it matches the format of the other hops.
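To sanity-check the transformation, here are two made-up hop lines run through the same pipeline (hypothetical hostnames and timings; real traceroute output varies by platform). The second line imitates a load-balanced hop, where the times around the alternate host are only single-space separated:

```shell
# a normal hop: double spaces become commas
echo ' 2  a.example (10.0.0.1)  1.1 ms  1.2 ms  1.3 ms' | sed 's/  /,/g' | sed 's/ ms / ms,/g'
# ->  2,a.example (10.0.0.1),1.1 ms,1.2 ms,1.3 ms

# a load-balanced hop: the second sed adds the comma after "1.1 ms"
echo ' 3  a.example (10.0.0.1)  1.1 ms b.example (10.0.0.2)  1.2 ms  1.3 ms' | sed 's/  /,/g' | sed 's/ ms / ms,/g'
# ->  3,a.example (10.0.0.1),1.1 ms,b.example (10.0.0.2),1.2 ms,1.3 ms
```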
Tuesday, August 23, 2011
Inverting ebooks for better reading
I like ebooks. I don't like staring at lightbulbs. Hopefully, this one-liner will help others with the same problems I have with black-on-white text ebooks.
pdf2ps foo.pdf - | convert - -negate bar.pdf
Wednesday, April 27, 2011
Fun finding things
I found a neat way to use find today. If you want to do an inverse search (think grep -v, but in find), simply use '!'. For example, to find all files that don't end in .zip:
find . '!' -name '*.zip'
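The '!' also combines with find's other tests. A quick sketch (the scratch directory and file names here are hypothetical, just for illustration):

```shell
# set up a few throwaway files
mkdir -p /tmp/findtest
touch /tmp/findtest/a.zip /tmp/findtest/b.txt /tmp/findtest/c.tar

# regular files only, excluding anything ending in .zip
find /tmp/findtest -type f '!' -name '*.zip'
```

This prints b.txt and c.tar but not a.zip. The quotes around '!' keep the shell (especially with history expansion enabled) from interpreting it before find does.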
Sunday, November 7, 2010
Personal scripts for maintaining OpenDiagnostics Live CD
I am posting my scripts for maintaining my OpenDiagnostics Live CD so others can use them and add on to them as they see fit. Quite a while ago, I posted a fairly long shell script that I used to keep the ClamAV Live CD up-to-date with virus definitions, but the OpenDiagnostics Live CD is more comprehensive and a single script didn't make sense in my mind. If someone wants to modify them to locally update and modify a custom distro, feel free to take the scripts and edit them as you wish. If you come up with anything particularly neat, feel free to send me the code!
It is relatively easy to set up the project folder. The 'chroot' folder is just the filesystem.squashfs unsquashed. You can mount the ISO (mount -o loop image.iso tmp/) to create the image folder (just copy the contents of tmp/ to image/).
And without further ado, here are the scripts I use, in order from first-used to last-used:
chroot.sh
#!/bin/bash
sudo cp /etc/hosts chroot/etc/hosts
sudo cp /etc/resolv.conf chroot/etc/resolv.conf
#sudo cp /etc/apt/sources.list chroot/etc/apt/sources.list
sudo chroot chroot mount -t proc none /proc
sudo chroot chroot mount -t sysfs none /sys
sudo chroot chroot mount -t devpts none /dev/pts
# 'export' is a shell builtin, not a binary, so it cannot be invoked
# through chroot directly; set the environment when launching the shell:
sudo chroot chroot /usr/bin/env HOME=/root LC_ALL=C /bin/bash
After chrooting, you can do what you wish with the chroot: install any apps you want, modify or uninstall apps, etc.
clean.sh -- clean up the chroot before remastering. This is run after modifying the chroot.
#!/bin/bash
sudo chroot chroot apt-get clean
# run the glob inside the chroot, or the host shell expands /tmp/* first
sudo chroot chroot sh -c 'rm -rf /tmp/*'
sudo chroot chroot rm /etc/resolv.conf
sudo chroot chroot umount -lf /proc
sudo chroot chroot umount -lf /sys
sudo chroot chroot umount -lf /dev/pts
remaster.sh -- Create a beta iso that you can test to make sure all your changes made were successful.
#!/bin/bash
sudo chroot chroot dpkg-query -W --showformat='${Package} ${Version}\n' | sudo tee image/casper/filesystem.manifest
sudo cp -v image/casper/filesystem.manifest image/casper/filesystem.manifest-desktop
REMOVE='ubiquity casper live-initramfs user-setup discover1 xresprobe os-prober libdebian-installer4'
for i in $REMOVE
do
sudo sed -i "/${i}/d" image/casper/filesystem.manifest-desktop
done
sudo rm image/casper/filesystem.squashfs
sudo mksquashfs chroot image/casper/filesystem.squashfs -e boot
sudo rm image/casper/filesystem.size
printf "$(sudo du -sx --block-size=1 chroot | cut -f1)" | sudo tee image/casper/filesystem.size
(cd image && find . -type f -print0 | xargs -0 md5sum | grep -v "\./md5sum.txt" > md5sum.txt)
cd image
sudo rm -f ../OpenDiagnostics_beta.iso
sudo mkisofs -r -V "OpenDiagnostics Live CD" -cache-inodes -J -l -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -o ../OpenDiagnostics_beta.iso .
cd ..
beta_to_stable.sh -- Finally, move your changes to stable and zip up the ISO for uploading.
#!/bin/bash
rm -f OpenDiagnostics_stable.iso OpenDiagnostics_stable.iso.zip
mv OpenDiagnostics_beta.iso OpenDiagnostics_stable.iso
zip OpenDiagnostics_stable.iso.zip OpenDiagnostics_stable.iso
It would be pretty easy to set up a cron job to automagically update your chroot and remaster the ISO every week, month, whatever. Hope this helps.
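For example, a hypothetical crontab entry (assuming the scripts live in /home/user/opendiagnostics; note that chroot.sh as written drops you into an interactive shell, so for cron you would only chain the non-interactive steps):

```shell
# m h dom mon dow  command -- remaster every Sunday at 3am
0 3 * * 0  cd /home/user/opendiagnostics && ./clean.sh && ./remaster.sh && ./beta_to_stable.sh
```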
Friday, November 2, 2007
GNOME 2.20 on Ubuntu Feisty
So, the only reason I wanted to upgrade to Gutsy was that I didn't feel like compiling GNOME 2.20, and Gutsy came with it pre-installed. Tonight, I folded and decided to compile GNOME since I really wasn't impressed with Gutsy. I already have a lot of devel packages installed on my system (like libglib2.0-dev, libgtk2.0-dev, build-essential), so I am sure these are not the only packages you need. I downloaded GARNOME from ftp.gnome.org and unpacked it to my desktop. cd to the garnome/desktop folder in the folder you just unpacked. If you have all the dependencies, make paranoid-install should work right out of the box, but I had to get the following packages first:
sudo apt-get install libglitz1-dev libglitz-glx-dev libtiff4-dev python2.5-dev flex libgdbm-dev libxml-simple-perl libmagick++9-dev
Anyway, this week sometime Christer Edwards from the Utah LoCo group will be coming down to teach some classes and would like to get some dinner with me and a couple more of the guys in the DFW group. Hopefully, this will be Thursday.
Gentoo has taken about 3 days to compile so far, though I have messed up a lot. Once I get a working copy for a LiveCD, I will test it out and see how it performs against my Ubuntu ones. If the change isn't significant enough to spend more time on it, I will just drop it and use Ubuntu. So far, it is looking as if it will beat Ubuntu, but I can't say for sure, yet. I want to build a 64-bit and 32-bit version of each just to be thorough. I will try to have a solid beta of both Home and Pro uploaded by the end of the weekend (probably Ubuntu). I keep saying that to myself whenever I get free time ("yeah, lemme just upload this no...wait, what if I did this...") and I end up starting a lot of work on it that really isn't needed. It is a bad habit, but maybe I can get myself out of it.
I received my openSUSE 10.3 box this week and am absolutely thrilled. I can't wait to get it loaded in a VM to test it before I put it on a working computer. I am sure it will exceed my expectations, but I am just being paranoid.
Right now, I need to get out of the house, maybe I can convince Richard to go to Starbucks after his Calculus test...
Thursday, October 25, 2007
I love cats (not the animal)
Personally, I think one of the most useful tools in Linux is cat (short for conCATenate). I use it for all sorts of things, like sending text files through a pipe (stdin) to other tools such as sed. In fact, that is how I usually upgrade my system:
cat /etc/apt/sources.list | sed -e s/feisty/gutsy/g | sudo tee /etc/apt/sources.list ; sudo apt-get update ; sudo apt-get dist-upgrade
That sends the text file sources.list to sed (the stream editor), replaces all instances of feisty with gutsy, and sends the result to tee, which writes it back to sources.list. It then updates apt and upgrades my system accordingly. (One caveat: tee truncates the file while cat is still reading it, which only works reliably for small files; sudo sed -i s/feisty/gutsy/g /etc/apt/sources.list is the safer form.) The reason I like this is that I can ssh into my box remotely and upgrade my system without the need of a GUI (I know there is ssh -X, but it can be dead slow over a bad connection).
Other good uses of cat are joining multiple files together (in conjunction with tar and split, you can do some pretty powerful stuff with backing up your data). Let's say you have a couple different PDF's that you would like to join into one, you would do something like this:
cat pdf1.pdf pdf2.pdf pdf3.pdf > new_pdf.pdf
Now you have all your PDF's in one file (fair warning, though: most PDF readers will only render the first document in a raw concatenation, so a tool like pdftk or Ghostscript is more reliable for merging PDFs). This also works if you have a movie that is broken up into parts, such as Stephen King's The Stand:
cat The\ Plague.avi The\ Dreams.avi The\ Betrayal.avi The\ Stand.avi > Stephen\ King\'s\ The\ Stand.avi
The result is a nice 2.7 gig file that has all the parts in it.
Ok, so putting files together is really useful, but what about taking them apart? split can help us. Let's say you back up your system to a tarball every week, the resulting tarball is several gigs, and all you have is a CD burner. How do you get those backups back to your computer quickly using CDs?
split -b 650m backups.tar.gz
That breaks up your backups tarball into as many 650MB pieces as needed (named xaa, xab, xac, and so on by default), which can be put back together with cat after being transferred to your host machine.
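Here is the round trip in miniature, using 1k pieces instead of 650m and a random stand-in file instead of a real backup (paths are hypothetical scratch locations):

```shell
mkdir -p /tmp/splitdemo
cd /tmp/splitdemo
head -c 3000 /dev/urandom > backups.tar.gz   # stand-in for a real backup
split -b 1k backups.tar.gz                   # produces xaa, xab, xac
cat x?? > restored.tar.gz                    # reassemble; the glob sorts in order
cmp backups.tar.gz restored.tar.gz && echo identical
# -> identical
```

The shell expands x?? in sorted order, which matches the order split wrote the pieces, so the reassembled file is byte-for-byte identical to the original.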