Well, not sure if this is interesting, but I just wanted to share for people who aren't that nerdy-geeky on the inside. Anyhow, the file system stores characters as bytes: an ASCII character takes one byte per character, while a UTF-8 character can take anywhere from one to four bytes depending on the code point. Specifically, I'm using Mac OS X, so I'm on the HFS+ file system.

Try creating a file with:

$> vim test

In your file, insert the character "p". If you check the file size with "ls -alth test", you'll notice it takes 2B. That's because the character "p" takes one byte, and vim also appends a line feed when it saves, which brings the total to 2B (the inserted character plus the line feed).

If you open the file in a hex application like Hex Fiend, you'll notice that on the left side it shows the two bytes "70 0A": "70" is the hex value of the ASCII character "p", and "0A" (10 in decimal) is the line feed vim inserted.
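
If you want to double-check all of this without leaving the terminal, here's a minimal sketch using xxd (it ships with vim) and wc; the filename "test" and the sample UTF-8 characters are just for illustration, and the byte counts assume your terminal is set to UTF-8:

$> printf 'p\n' > test    # write the same two bytes vim would: "p" plus a line feed
$> ls -alth test          # should report a 2B file
$> xxd test               # should print something like: 00000000: 700a  p.
$> printf 'é' | wc -c     # 2 -- this UTF-8 character takes two bytes
$> printf '日' | wc -c    # 3 -- and this one takes three

Note that "wc -c" counts bytes, not characters, which is exactly why it's handy here: it shows you how many bytes each character actually occupies on disk.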