Sunday, November 30, 2014

Ubuntu 14.04 Converting Video Files

Introduction

The idea here is to convert video files into the .mp4 format so that they can be played in Chrome and via the Chromecast. Most files come in as .avi or .mkv, so this covers converting those two types to .mp4.

Downloading avconv

First you will need the avconv application, which is part of the libav-tools package and can be installed with the following command:

sudo apt-get install libav-tools

Finding Files

A handy pair of commands for finding the files you need to convert:

find . -name "*.avi"

find . -name "*.mkv"

Converting AVI to MP4

Unfortunately, moving from .avi to .mp4 requires transcoding the video, which is a longer effort:

avconv -i {input_file}.avi -c:v libx264 -c:a copy {output_file}.mp4
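If you want more control over the output, libx264 also accepts quality options; a sketch (the -crf value is a quality dial, lower means better quality, and 18-28 is a sane range):

avconv -i {input_file}.avi -c:v libx264 -preset slow -crf 22 -c:a copy {output_file}.mp4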

Converting MKV to MP4

Since mkv is mostly compatible with mp4, you just need the streams moved from one container to another

avconv -i {input_file}.mkv -codec copy {output_file}.mp4

Sometimes the audio is in AC-3, or has more than two channels, which the Chromecast and browsers may not appreciate. In those cases you can transcode just the audio:

avconv -i {input_file}.mkv -vcodec copy -strict experimental -acodec aac -ac 2 -ab 93k {output_file}.mp4

You could also transcode the audio to MP3, which is a generally accepted format inside .mp4. This also has the benefit of not requiring an experimental feature set:

avconv -i {input_file}.mkv -vcodec copy -acodec mp3 -ac 2 -ab 192k {output_file}.mp4
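If you have a whole library to convert, you can feed the find results into a loop; a minimal sketch, assuming the bash shell, which writes each .mp4 next to its source file:

find . -name "*.mkv" -print0 | while IFS= read -r -d '' f; do
    avconv -i "$f" -codec copy "${f%.mkv}.mp4"
done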

Tuesday, August 5, 2014

Creating Stereo Renders in Blender

Recently I've been playing around with my Google Cardboard Viewer, and one of the things that I wanted more of was 3d content that could be viewed through cardboard. I eventually fired up blender and looked around for some resources on making stereoscopic 3d images. The first thing I found was Sebastian Schneider's plugin that automates the creation of a number of stereo cameras. The plugin didn't work great in the latest version of blender (2.71 while I was working on this) but does get most of the flow done.

I started with an animated cube, coming from behind the zero-parallax plane towards the cameras, and then retreating back behind it. In order to render two cameras at once in blender's node compositor, you have to duplicate the scene, and then set a default camera for each scene. So after setting up the cameras, I duplicated the scene (something I learned later that Sebastian's plugin does... although it complains while doing it). I then set up some nodes to render, lens distort, scale, transform, and mix the two scenes. Here's a shot of the node graph:

Unfortunately the effect wasn't great. Perhaps because there wasn't anything to compare distance against.

I then decided that maybe some more complex geometry would help, so I went and grabbed Andy Goralczyk's Creature Factory 2 turntable asset. I turned on the stereo cameras, set up the nodes, and rendered a shot. I quickly found that with a scene that takes ~20 minutes to render, it might actually not be a great idea to render both shots together, then distort, scale, and transform the beautiful renders while tweaking the params. So I decided to go through a slightly different workflow: render each camera separately to a sequence folder, then set up a different blend file to composite and produce the images/videos. Here's the first render, posted on g+, that went through the original workflow:

Here are a few shots of the new workflow, which allowed me to scale out my renders to 4 different machines to make the overall process a bit faster. Also note that I scaled down the distance between the cameras to improve the perception of the size of the creatures.

And finally here is the HD Turntable Render, viewed best in Google Cardboard:

Making Stereo Pictures

So I've been messing around with Google Cardboard and one of the things that I wanted to do was create 3d pictures that I could view through cardboard. I remember one of my friends back in the day had the HTC EVO 3D, a phone that could take, and somewhat display, 3d pictures. The phone had dual cameras on the back, and when you took a picture it would snap a shot from both cameras.

I just decided to slap my nexus 5 and my wife's nexus 5 together and take a few pictures to see how well I could reproduce the effect. Turns out it works quite well. Here are the 3 pictures that turned out decently:

In essence, you put the two phones/cameras at the same level, and then separate them by about eye distance. Make sure that they are focused on the same target, but that they aren't angled in toward said target; both should point parallel down range. Snap the pics at about the same time.

Now take the two photos, open up your favorite photo editor, scale them down, and place them side by side, view in cardboard and enjoy a surprisingly decent 3d image.

You may also want to color balance, and generally do everything you can to make sure the two images look near identical.

Monday, June 16, 2014

Make Tomcat Start by Default in Fedora 20

So today I got tired of starting up tomcat manually, and decided to make it start by default at boot. After doing a bit of digging on the intertubes I found that if you create an executable script in /etc/rc.d called rc.local, fedora will run it on boot. So I created the file and added the following lines:


#!/bin/sh
# Start tomcat at boot, as user sean rather than root
cd /home/sean/Public/apache-tomcat-8.0.8/bin
su sean -c "./catalina.sh start"

Since the script runs as root at startup, I wrapped the catalina.sh command in a switch user (su) command so that tomcat is started under my user instead of root.
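If the file doesn't already exist, remember to make it executable, or it will be skipped at boot:

sudo chmod +x /etc/rc.d/rc.local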

Saturday, June 14, 2014

Getting set up on PGP with GPG

PGP, or Pretty Good Privacy, is a standard in personal cryptography. It can encrypt your emails, sign documents with a cryptographic signature, encrypt files on your hard drive, and (my new favorite) sign git commits. One of the problems with cryptography is that getting things encrypted and decrypted only gets you half there. The harder part of the equation, at least for us users, is making sure that you are talking to the right person. This means that someone has to vet people, their ids, their domains, emails, and their cryptographic credentials. PGP takes this away from corporations and moves it to a more p2p model called the web of trust, where any user is given the power to vet and approve other users.
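As a taste of that git feature, once you've created a key (covered below), pointing git at it takes a couple of commands; a minimal sketch, with the key id as a placeholder:

git config --global user.signingkey <keyid>
git commit -S -m "a signed commit"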

GPG, or GNU Privacy Guard, is an implementation of PGP and can easily be found on linux systems. There is gpg and gpg2, the former being a monolithic binary, and the latter a more modularized library allowing gpg to tie into gnome, guis, and other libraries. This blog will cover the command line tools only.

To get started the first thing that you'll want to do is get yourself a public/private key combination. Before diving into the commands, let me take a minute to talk about the weaknesses of the system. Once you've generated your keys, you'll be set for uber secure communication; without some futuristic computing power, it would take even the most powerful modern computers thousands of years to break your messages using brute force methods. Therefore most people won't even attempt to do so; they will try to use alternate means to get ahold of the private messages. The easiest way is to get ahold of your private key and use that to read the messages. While setting up your PGP key you'll have a chance to password protect it. Password attacks on the private key will typically be much more successful, especially if you use a weak password. So the moral of the story is to pick a password that is hard to guess, and long enough that it is hard to brute force as well.

Alright, let's get started.

gpg2 --gen-key

Ok, now come the questions.

  1. First up, it will ask what sort of key you want. If you want to do all the things mentioned above, you'll want a dual RSA key, or option 1 on my machine.
  2. You'll then have a chance to decide the size of your keys, 2048 being standard as of the writing of this post, 4096 if you want to make all the things secure.
  3. Next you'll tell it how long your key should be valid for. Most people will opt for the infinite option (0) and then rely on revocation to invalidate a key that has been compromised, expired, or outlived its purpose. I've read that if you are using the key for some short-term, less personal reason, like a campaign or some event, you can use the expiration mechanism to signal that you only intend to communicate with it for a limited time.
  4. Now you get to identify yourself, keep in mind that if you are going to try to build the web of trust, you'll be proving who you are through id cards, and it'll make things easier if everything matches up.
  5. The final step is to specify the password that will guard your private key. When generation finishes, gpg prints a summary like this:
gpg: key A6BAE9BD marked as ultimately trusted
public and secret key created and signed.

gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
pub   2048R/A6BAE9BD 2014-06-15
      Key fingerprint = E0E6 3484 5581 0B28 8EB8  2B68 EB65 E9D5 A6BA E9BD
uid                  admin account 
sub   2048R/144081B9 2014-06-15

Let's examine some of the parts here. So basically you have a public key; this is the one that you'll be sharing with all your cohorts, putting up on your webpage, broadcasting to the world, and reading off as a fellow pgper examines your id cards. Essentially your public key is hashed into a fingerprint, which is much shorter than reading off the actual key. The full fingerprint is 40 hex digits, which is often shortened to the last 16 or 8 to make things more convenient. You'll then notice your userid down below that; uid's or uat's are considered components of a public key. In fact when people certify you, they certify a uid in combination with a public key. You can have multiple uid's on your public key, and even pictures. Finally you have a sub key; this is a binding to your private key, so people can author messages to your private key through your public key. Remember the "RSA and RSA" step from earlier: you have both of them here, however only one will ever be fully shared with other users, as your private key is just for you.
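To read off, or double check, the full fingerprint of any key in your keyring:

gpg2 --fingerprint <keyid>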

Alright, at this point it would be wise to back up your private key. You can export it to a file, which is well suited for some offline storage format like a cd/dvd. You could burn it, and then tuck it away somewhere safe. That way if you ever lose your key, you can still get back the secret key. Remember however that the password that you set on your private key will remain, so if you forget your password, you are still very much out of luck. I suppose if you are forgetful enough you could remove the password (i.e. set it to empty) on the private key before you export it, or even make the password a part of the file, or a readme in a different file.

gpg2 --export-secret-keys --armor --output private-key.asc

The command above will export your private key to the file private-key.asc in a base64 encoded format which is safe to view in a text editor. Since your private and public keys are so bound together, it also includes your public key. The private key is still password protected at this point.

The other thing that you should do straight away is generate a revocation certificate for your public key, and keep it somewhere safe. This helps in the case that you forget your password: you can revoke your key, which will let everyone know that it is invalid.

gpg2 --armor --output revoke.asc --gen-revoke <keyid>

Alright, now all the details are sorted out and you are ready to go. Let's first address how to distribute your public key to your cohorts:

gpg2 --keyserver pgp.mit.edu --send-key <keyid>

The above will send a key up to a keyserver, in this case the MIT PGP server. Once you put a key up to a keyserver there is no going back; those keys are up there forever. So make sure that you really want to do it. Once they are up there, it makes life much easier for those wanting to obtain your key.

gpg2 --keyserver pgp.mit.edu --recv-key <keyid>

For example, someone could grab your key from the keyserver using the above command and the short fingerprint of your key.

gpg2 --armor --output public-key.asc --export <keyid>

So if you don't want to make your key public quite yet, you can export it in the ASCII-armored format. You can then send this file to your friends via email, put it up on your website, or expose it to those that you want to communicate with in a plethora of ways.

gpg2 --import someones-public-key.asc

And you can pull in someone's public key this way as well.

As mentioned before, you are an agent of trust for pgp keys, so once you obtain someone's public key, and before you start using it, it's often good to sign their public key to 'certify' it. In fact gpg will warn you when you try to send messages to keys that you haven't signed, or otherwise obtained trust in. In gpg land this signing is yours and yours alone to shoulder. You could sign anyone and everyone's key if you wanted, and make your web of trust huge. This might be bad though: you might inadvertently sign a pretender's key, and mislead people into believing that the pretender is a person they wished to securely communicate with. So usually it's better to meet the person face to face, and verify through state issued ids that they are who they say they are and that they own the key that they profess to own. Occasionally groups of people will get together and sign each other's keys. These parties of nerdiness are called key signing parties, and help the web of trust grow and expand.

gpg2 --ask-cert-level --sign <key or uid>

And this is how you sign someone's key. This specific invocation allows you to certify the key to a certain level, 0-3. This is where you can start being a little more relaxed in how you sign someone's key. If you meet someone in passing and they show you a fingerprint, and you hear someone call them bob, then maybe you can sign to a level 0 or 1. If you communicate via email or phone, maybe a 2. And if you do all the above, and also see a state issued id and passport, a 3 would be reasonable. In the end though it is up to you.

Now let's say you want to encrypt a message to someone.

echo abcdefghijklmnopqrstuvwxyz > message1.txt
gpg2 --armor --recipient <keyid> --encrypt message1.txt

This will create a new file called message1.txt.asc, which is an ASCII-armored, encrypted version of message1.txt.

rm message1.txt
gpg2 --output message1.txt --decrypt message1.txt.asc

This will remove the original text file, and then decrypt the encrypted file and output that back into message1.txt

gpg2 --armor --sign --recipient <keyid> --encrypt message1.txt
gpg2 --decrypt message1.txt.asc

In this last step we added on a signature, which basically hashes the original message, and then uses your private key to sign the hash. In all the previous examples we have shown how anyone with your public key can send you an encrypted message. However, even though you may receive a message from someone's email address, that does not necessarily mean that it came from that person. A signature cryptographically ensures that the person (or, in an unfortunate scenario, people) holding the private key produced the message exactly as you received it.

You can also send messages to multiple recipients, and because the file or message is really encrypted with a symmetric cipher like AES, only the key to the symmetric cipher needs to be encrypted for each recipient, so the overhead is quite small.
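For example, repeating the --recipient flag encrypts that one session key to each person (the key ids are placeholders):

gpg2 --armor --recipient <keyid1> --recipient <keyid2> --encrypt message1.txt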

OpenSSL Certificates to Tomcat Keystores

So I've been through this process a few times, and having to relearn it every time has not been fun. But before we get all technical let's do a little review. SSL/TLS allows clients/web browsers to securely communicate with servers. It utilizes public key cryptography (RSA or Elliptic Curve) to allow the server to hand out a key to the client. The client can then use that key to encrypt a message that only the server will be able to read (by decrypting it with the private key). This is great and all, but unfortunately being able to securely communicate doesn't solve the problem of communicating with the wrong person. This is where Certificate Authorities (CAs) come into play. They verify that the person is who they say they are, and that they own the domain that the key is tied to. Once they have verified all that, they "sign" the public key of the server with their private key. Now clients can obtain the public key (signed by the CA) from the server, use the CA's public key to verify that the signature was indeed from the CA, and then build trust that not only can they securely communicate with the server, but that they are communicating with the right server.

So in order to get started you have to create your private/public key pair. The public key will be the one that is handed out to everyone, and also the one that will be signed by CAs, while the private one will be kept only by you, and likely your server, in order to decrypt all the traffic coming in and encrypt the traffic going back out.

openssl genrsa -out <yourdomain>-private-key.pem <key size in bits>

This command will generate an RSA keypair and save it with no password protection. Some common key sizes (from least secure to better secured): 1024, 2048, 4096.


openssl pkey -text -in <yourdomain>-private-key.pem
openssl pkey -in <yourdomain>-private-key.pem -pubout -text

These commands print the different components of the key to the console, the first dumping the base64 encoded private key part, and the second the base64 encoded public key part.

Ok, so now you have the keys that allow you to set up secure communication; the next step is to get your public key signed by a CA. CAs generally accept a certificate signing request (.csr), which consists of your public key and a bunch of information about that key, most important being the "common name" or CN, which should resolve to your domain name. Mine is murphysean.com, or www.murphysean.com, and if your CA allows it you could use *.murphysean.com to make the certificate work on all subdomains.

openssl req -out <yourdomain>.csr -key <yourdomain>-private-key.pem -new -sha256

This command takes in the public/private key combo and a bunch of information from you on the command line, and pumps out a request that the CA can accept and sign to produce your final certificate. The -sha256 will request that the authority sign a sha256 hash of your public key combined with all the information gathered from you; you can leave it off to let it default to something more common. The more you pay, the more the CA will be willing to certify. For most certificate authorities' cheaper signing request options, they are only signing the CN, so don't be surprised when your final certificate has nulls for all the other values you filled in.

cat <yourdomain>.csr

This will dump the csr text file to the console for easy copy and paste into your CA's website form. At this point you should have the certificate from your CA. Save this file as <yourdomain>.crt

They will often also give you their root certificate, and sometimes their intermediate certificates as well. You'll want to nab those up at this time and save them. At this point you have all the pieces you need. I'll include some instructions for nginx at this point.
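In short, nginx can read the PEM files directly; a minimal sketch of the relevant server block (the paths are placeholders, and nginx expects the server certificate concatenated with any intermediates in one file):

server {
    listen 443 ssl;
    server_name <yourdomain>;
    ssl_certificate /etc/nginx/ssl/<yourdomain>-chain.crt;
    ssl_certificate_key /etc/nginx/ssl/<yourdomain>-private-key.pem;
}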

To get set up in java/tomcat you still have a bit of work to do. Tomcat uses a java keystore that should contain the server certificate. The first step in getting a keystore set up is to get your certificate into a PKCS12 file. OpenSSL can do this for us.

openssl pkcs12 -export -in <yourdomain>.crt -inkey <yourdomain>-private-key.pem -out <yourdomain>.p12 -name <yourdomain> -CAfile <yourca>.pem -caname <yourcaname> -chain

This will merge all the different certificates into a chained pkcs12 file. You'll want to password protect the private key, with a password that is ok for your server application to know. Also as a side note here: if you were signed by an intermediate certificate, you'll want to obtain all the certificates that fall between you and the root certificate. Open the intermediate file in a text editor, and then just start appending all the certificates in order up to the root certificate. In many cases this is just the one root certificate, as there is only one intermediate in between you and the root.
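Building that chain file is just concatenation; for example (the file names are placeholders):

cat <intermediate-ca>.crt <root-ca>.crt > <yourca>.pem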

keytool -importkeystore -srckeystore <yourdomain>.p12 -srcstoretype PKCS12 -alias <yourdomain> -destkeystore <yourdomain>.keystore

So now you've got everything in order, the last step is just to get your tomcat server pointed at your keystore for https/ssl/tls traffic. Open up your server.xml file, and uncomment the SSL Connector. I added two properties to mine: the keystoreFile property, which I pointed at my .keystore file, and the keystorePass, which unlocks the keystore and also the private key in that keystore.

<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
clientAuth="false" sslProtocol="TLS"
keystoreFile="<pathtokeystore>.keystore"
keystorePass="<ubersecurepassword in plaintext>"/>

In tomcat 8 the final working Connector looked like the above.

Thursday, February 13, 2014

Google Fiber Mount smb on Linux (Fedora/Ubuntu)

Well I recently had the privilege of having Google Fiber installed at my house. Since I opted for the good ol' extra tv package, I got a nice 2TB network hard drive. It allows local machines to connect with it through a samba file server so that you can back up all your photos, songs, and videos.

This is really nice. Something I'd really like to do is share my videos not only with my tv box, but also with my web server, so that I can serve these videos, songs, etc. through my web site. Obviously getting the local gfiberstorage server to take a peek into my 500GB server's hard drive would be quite a bit to chew off. Also, why use my server's hard drive when I have this nice 2TB local HD?

So the next choice was to map the remote gfiberstorage drive into my webserver's root file system so that it could get at all the videos through the network. I've only mounted through the GUI on linux, and that wouldn't work in this scenario, so I had to do a little research into mounting network drives into the linux file system.

I decided to start on my Fedora 20 laptop and figure it out there before moving to the ubuntu box.


  1. Create a mount point. /media seems to be a popular mount point, in addition to /mnt. I chose media cause it seemed to fit. sudo mkdir -p /media/gfiberstorage/videos/
  2. Mount the samba share. This step took me quite a while to figure out. At first I just mounted it and it asked me for a password for root, to which I typed root, and it worked. However, only my root user was able to read and write in the directory. It took me a while, but I found a few options that helped me mount it better. These options go behind the -o. They are:
    1. guest
      1. This removes the need to add a user/pass
    2. rw
      1. This would allow both read and write access (The default for the user)
    3. uid=<local username>
      1. Sets the local username that should own the dir/files of the mount
    4. gid=<local groupname>
      1. Sets the local groupname that should own the dir/files of the mount
    5. iocharset=utf8
      1. Allows you to access files/dirs with UTF-8 (non english) chars
    6. In case you want to log in with user/pass
      1. username=<remote username>
      2. password=<password for remote user>
      3. credentials=/home/userdir/.smbcredentials
        1. You can also store your creds in a file and reference that to keep your user/pass secure on the machine. In this case just put username=<remote username> and password=<password for remote user> on the first two lines of the file (see the example after this list).
    7. file_mode=0777,dir_mode=0777
      1. If you want to spin up the mount with file and dir privs. You could do this and keep ownership to root, but still allow other users to work in the dir
    8. sec=ntlm
      1. I didn't need this on either my fedora 20 or ubuntu 12.04, but apparently due to some recent moves of cifs mount code into the kernel this option may be necessary in some cases.
    9. _netdev
      1. If you are mounting during bootup this option will delay creating the network connection to the remote file system till the network is in place.
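Here's what the .smbcredentials file from the credentials option might look like, with placeholder values:

username=<remote username>
password=<password for remote user>

And lock down its permissions so only your user can read it:

chmod 600 /home/userdir/.smbcredentials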
I ended up with this guy in order to mount:
sudo mount -t cifs -o guest,rw,uid=sean //gfiberstorage/videos /media/gfiberstorage/videos

Now on to my ubuntu box, where I had to install cifs-utils:
sudo apt-get install cifs-utils

After which I could mount using the same command/options as the fedora box.
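If you'd rather have the share mounted automatically at boot, the same options translate into an /etc/fstab line; a sketch using the guest options from above plus _netdev:

//gfiberstorage/videos /media/gfiberstorage/videos cifs guest,rw,uid=sean,_netdev 0 0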

And when you're all finished, you can unmount via:
sudo umount /media/gfiberstorage/videos

And now I can dump files directly to the network share, access the videos through the google fiber interface, and also serve them out through my http server. Pretty cool!

The main reason I wanted to do this is that my chromecast requires http urls in order to play videos through its browser. I've managed to write an application that lets me serve videos to the chromecast from my local network, my public web server, and yes, also files I've thrown up to google drive (that 1TB they add to your account makes this really nice).