Posts

Showing posts from 2013

Upgrading MySQL (Percona) causing: dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)

I encountered this while upgrading my packages, one of which was Percona Server. The upgrade then failed with this strange error:

(Reading database ... 82838 files and directories currently installed.)
Preparing to replace libmysqlclient18-dev 1:5.5.33-rel31.1-566.precise (using .../libmysqlclient18-dev_1%3a5.5.33-rel31.1-568.precise_amd64.deb) ...
Unpacking replacement libmysqlclient18-dev ...
dpkg: error processing /var/cache/apt/archives/libmysqlclient18-dev_1%3a5.5.33-rel31.1-568.precise_amd64.deb (--unpack):
 trying to overwrite '/usr/lib/libmysqlservices.a', which is also in package libmysqlclient-dev 1:5.5.33-rel31.1-566.precise
dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
Errors were encountered while processing:
 /var/cache/apt/archives/libmysqlclient18-dev_1%3a5.5.33-rel31.1-568.precise_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)

So my fix was to remove the package "libmysqlclient18*". …
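The excerpt cuts off right after naming the fix, so here is a minimal sketch of what removing the conflicting packages could look like on an apt-based system; the package names come from the error above, and the reinstall step is an assumption about finishing the interrupted upgrade:

# Sketch only: remove the conflicting -dev packages named in the dpkg error
# (check what depends on them first with: apt-cache rdepends libmysqlclient18-dev)
sudo apt-get remove libmysqlclient18-dev libmysqlclient-dev

# Let apt finish the half-done upgrade, then pull the new -dev package back in
sudo apt-get -f install
sudo apt-get install libmysqlclient18-dev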

Ubuntu: locale: Cannot set LC_CTYPE to default locale: No such file or directory

For some reason, it seems that LC_CTYPE is not set and cannot fall back to the LC_ALL value. My very easy fix for this is to declare a global shell environment variable in /etc/default/locale. Here's what my /etc/default/locale contains:

toytoy@ubuntu-toytoygogie:~$ cat /etc/default/locale
LANG="en_US.UTF-8"
LC_ALL=en_US.UTF-8

Hope that helps.
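If the warning persists, the locale itself may simply not have been generated yet; this is an assumption about a stock Ubuntu/Debian install, but the following usually sorts it out:

# Generate the locale referenced in /etc/default/locale, then log out and back in
sudo locale-gen en_US.UTF-8
sudo dpkg-reconfigure locales   # interactive alternative
locale                          # verify: no more "Cannot set LC_CTYPE" warnings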

Why Ubuntu in a VM cannot connect to the internet

Usually I get this problem when I start Ubuntu and cannot connect to the internet. My fix is pretty simple, though it may or may not work on your end:

1. update your date
2. restart the networking service (/etc/init.d/networking)

So in my case, this is how I fixed it:

$ service ntp stop
$ ntpdate 10.0.1.19
$ service networking restart
$ service ntp start

So where did this 10.0.1.19 come from? It is another VM of mine, acting as a server; this guest talks to it over the NTP socket and updates its time from the 10.0.1.19 server. If you don't have another VM, I'm not sure about this part; just try to update your date to the current date. Afterwards, run

$ /sbin/dhclient eth0

and hopefully everything will work on your end.

Benchmark results using different storage technologies: FusionIO, SSD, RAID

I just wanted to share these great benchmark results for your guidance:

http://www.ntlug.org/Articles/DiskBenchmarks
http://www.mysqlperformanceblog.com/2009/05/01/raid-vs-ssd-vs-fusionio/

MySQL: learning how to recover your InnoDB from corruption

This is a wonderful post by Peter Zaitsev on learning how to recover your data: http://www.mysqlperformanceblog.com/2008/07/04/recovering-innodb-table-corruption/ The idea is that in every B-tree there are pages, and because InnoDB stores records inside the pages of that B-tree structure, you can recover your data directly from the tablespace.

MySQL: I like this blog post from Miguel. A must-read

If you're experiencing your ibdata1 file continuously growing, read this wonderful post by my colleague Miguel at Percona:

http://www.mysqlperformanceblog.com/2013/08/20/why-is-the-ibdata1-file-continuously-growing-in-mysql/

Great info on determining the IOPS required for Horde

Determining disk drives if you know the IOPS required: if you can estimate how many I/Os Per Second (IOPS) you need, you can determine how many disks you'll need as RAID10 and how many disks you'll need as RAID5. Generally, a single disk drive can do this many IOPS:

15k rpm: 180-210 IOPS
10k rpm: 130-150 IOPS
7200 rpm: 80-100 IOPS
5400 rpm: 50-80 IOPS

In a mirrored configuration:

Disk IOPS = Read IOPS + (2 * Write IOPS)

In a parity (RAID5) configuration:

Disk IOPS = Read IOPS + (4 * Write IOPS)

Example calculations: suppose you estimate that you need to support 40 Read IOPS (40 reads/sec) and 80 Write IOPS (80 writes/sec), and you want to use a mirrored configuration of drives:

Disk IOPS = Read IOPS + (2 * Write IOPS) = 40 r/s + (2 * 80 w/s) = 200 Disk IOPS

Using 7200 rpm drives, you need: 200 / 50 = 4 disk drives
Using 10k rpm drives, you need: 200 / 130 = 2 disk drives (always round up)

If you want …
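To make the arithmetic above easy to rerun with your own numbers, here is a small shell sketch; the 40/80 workload and the 50-IOPS-per-drive figure are just the example values used in the text:

# Example workload from the text above: 40 reads/sec, 80 writes/sec
READ=40; WRITE=80
PER_DISK=50                                   # assumed IOPS a single drive can sustain

RAID10=$(( READ + 2 * WRITE ))                # mirrored: 200 disk IOPS
RAID5=$((  READ + 4 * WRITE ))                # parity:   360 disk IOPS

# Round up when dividing by the per-drive capability
echo "RAID10 drives: $(( (RAID10 + PER_DISK - 1) / PER_DISK ))"   # 4
echo "RAID5  drives: $(( (RAID5  + PER_DISK - 1) / PER_DISK ))"   # 8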

Linux: checking the parent id and its threads

In Linux, there is a variety of tools you can use to inspect processes, but most of them are much the same. One of them is ps, or you can use htop. Below, I'm using ps to show the process and the threads spawned by the parent process:

[root@centos ~]# ps -p `pidof mysqlslap` -Lf
UID        PID  PPID   LWP  C NLWP STIME TTY          TIME CMD
root      6615 12502  6615  0    6 09:43 pts/1    00:00:00 mysqlslap --concurrency=5 --iterations=10 --query=select * from AddressCode; --user=root
root      6615 12502  6783  1    6 10:09 pts/1    00:00:00 mysqlslap --concurrency=5 --iterations=10 --query=select * from AddressCode; --user=root
root      6615 12502  6784  1    6 10:09 pts/1    00:00:00 mysqlslap --concurrency=5 --iterations=10 --query=select * from AddressCode; --user=root
root      6615 12502  6785  1    6 10:09 pts/1    00:00:00 mysqlslap --concurrency=5 --iterations=10 --query=select * from AddressCode; --user=root
root …

Converting sectors into MB - useful for understanding the sector counts in iostat in Linux

Suppose you have a sector of 512 bytes; then the conversion is just:

 1 sect.     1024 Bytes    1024 KB
--------- x ----------- x --------- = 2048 sect./MB
512 Bytes      1 KB          1 MB

i.e. if a sector is 512 bytes, as is common on a hard disk, then 1024 bytes == 2 sectors, so 1 KB is 2 sectors. Doing the math, 2 sectors/KB * 1024 KB/MB = (1024 * 1024) / 512 = 2048 sectors per MB. This is useful for understanding the sector counts reported by iostat in Linux.
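As a quick sanity check of the formula (assuming the common 512-byte sector), here is the same conversion in shell arithmetic:

SECTOR_BYTES=512
echo $(( 1024 * 1024 / SECTOR_BYTES ))                       # 2048 sectors per MB

# Convert a sector count from iostat (e.g. an rsec/s sample) into MB
SECTORS=409600
echo "$(( SECTORS / (1024 * 1024 / SECTOR_BYTES) )) MB"      # 200 MB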

Helpful resources for gdb tool

I just wanted to share these links:

http://poormansprofiler.org/
http://dom.as/2009/02/15/poor-mans-contention-profiling/

Oracle: MongoDB NoSQL Cluster Using Oracle Solaris Zones

This is just a very short post about something I'm interested in. Please check: How to Set Up a MongoDB NoSQL Cluster Using Oracle Solaris Zones

iOS: Using blocks as a variable

If you're into iOS programming and have a background in other programming languages like Java, JavaScript, PHP, or Python, you know these languages each have their own style of defining lambdas or closures that you can assign to a variable and then call that variable like a method or function. I just wanted to take notes on this so I don't forget, and also share it with you. Below is a snippet from the game I wrote, which looks like this:

void (^playComeAndRead)(BOOL) = ^(BOOL finished) {
    double delayInSeconds = 0.3;
    dispatch_time_t popTime = dispatch_time(DISPATCH_TIME_NOW, delayInSeconds * NSEC_PER_SEC);
    dispatch_after(popTime, dispatch_get_main_queue(), ^(void){
        if (_audioPlayer) {
            _audioPlayer = nil;
        }

        audioString = @"Come And Read";
        audioURL = [[NSBundle mainBundle] URLForResource …

What is interleaving?

Interleaving is a technique to avoid waiting through another full rotation of a disk when retrieving data from a sector. For example, if sectors are interleaved with a factor of 1:4 and the next data to be retrieved is on sector 5 after reading sector 1, and the sectors are arranged like this:

1 8 6 4 2 9 7 5 3

then the drive no longer needs to wait for another full revolution of the disk. This is what Wikipedia explains: Information is commonly stored on disk storage in very small pieces referred to as sectors or blocks. These are arranged in concentric rings referred to as tracks across the surface of each disk. While it may seem easiest to order these blocks in direct serial order in each track, such as 1 2 3 4 5 6 7 8 9, for early computing devices this ordering was not practical. Data to be written or read is put into a special region of reusable memory referred to as a buffer. When data needed to be w…

What does parity mean in RAID?

I just wanted to note this down so I don't forget. From Wikipedia, the parity bit in RAID works by giving an assurance that your data can be repaired using the XOR logical operation. See below: Parity data is used by some RAID levels to achieve redundancy. If a drive in the array fails, remaining data on the other drives can be combined with the parity data (using the Boolean XOR function) to reconstruct the missing data. For example, suppose two drives in a three-drive RAID 5 array contained the following data:

Drive 1: 01101101
Drive 2: 11010100

To calculate parity data for the two drives, an XOR is performed on their data:

    01101101
XOR 11010100
------------
    10111001

The resulting parity data, 10111001, is then stored on Drive 3. Should any of the three drives fail, the contents of the failed drive can be reconstructed on a replacement drive by subjecting the data from the remaining drives to the same XOR operation. If Drive 2 were …
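Here is a tiny bash sketch of the same reconstruction, using the two example drive values from the quote; the key property is that XOR-ing the surviving drive with the parity block gives back the lost one:

DRIVE1=$(( 2#01101101 ))
DRIVE2=$(( 2#11010100 ))

PARITY=$(( DRIVE1 ^ DRIVE2 ))                 # what RAID 5 stores on drive 3
echo "obase=2; $PARITY" | bc                  # 10111001

# Pretend drive 2 failed: rebuild it from drive 1 and the parity data
echo "obase=2; $(( DRIVE1 ^ PARITY ))" | bc   # 11010100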

SSH Login is slow!

I found this solution while playing with sshd_config. I'm connecting locally from another machine, and the latency while waiting to get logged in is a little bit annoying. To fix this, edit your /etc/ssh/sshd_config, but make sure you create a backup first:

cp /etc/ssh/sshd_config /etc/ssh/sshd_config.backup
vi /etc/ssh/sshd_config

Then add these lines:

UseDNS no
Compression yes

You can leave Compression out, but it might help; here is what the ssh man page says about compression:

-C Requests compression of all data (including stdin, stdout, stderr, and data for forwarded X11 and TCP connections). The compression algorithm is the same used by gzip(1), and the "level" can be controlled by the CompressionLevel option for protocol version 1. Compression is desirable on modem lines and other slow connections, but will only slow down things on fast networks. The defa…
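One thing the excerpt doesn't mention: sshd only reads its configuration at startup, so restart it after the edit. The service name below is the Ubuntu one and is an assumption for other distributions:

# Check the edited config for syntax errors, then restart sshd
sudo sshd -t
sudo service ssh restart     # "service sshd restart" on RHEL/CentOS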

Extending the disk size of your virtual machine in VirtualBox on Mac OS X (Linux guest)

So this was my problem: I have a CentOS 6.4 guest and I just realized I cannot extend my database anymore because I'm running out of disk space. So let's get to a solution to our problem. On Mac OS X, if you go into the VirtualBox application package, there is a utility called VBoxManage:

#> ls -al /Applications/VirtualBox.app/Contents/MacOS/
Display all 102 possibilities? (y or n)
ExtensionPacks/                       VBoxDDR0.r0.codesign                  VBoxNetDHCP.dylib                     VMMGC.gc-x86
UserManual.pdf                        VBoxDDU.dylib                         VBoxOGLhostcrutil.dylib               VMMGC.gc-x86.codesign
VBoxAuth.dylib                        VBoxDbg.dylib                         VBoxOGLhosterrorspu.dylib             VMMR0.r0
VBoxAuthSimple.dylib                  VBoxDragAndDropSvc.dylib              VBoxOGLrenderspu.dylib                VMMR0.r0.codesign
VBoxAutostart                         VBoxEFI32.fd …
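The excerpt cuts off before the actual resize step, so here is a minimal sketch of how VBoxManage is commonly used for this; the disk path and the new size are hypothetical, it only applies to VDI/VHD images, and the guest must be powered off first:

cd /Applications/VirtualBox.app/Contents/MacOS/

# Grow the (hypothetical) CentOS disk image to 40 GB; --resize takes megabytes
./VBoxManage modifyhd ~/VirtualBox\ VMs/centos64/centos64.vdi --resize 40960

# The extra space still has to be claimed inside the guest afterwards,
# e.g. by growing the partition/LVM volume and then the filesystem.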