Using mknod during export

Every Friday we take a full export backup of one of our databases, which is used for reporting purposes. The export dump is around 25 GB, and the backup mount point has 32 GB of total space. Every weekend I take the backup and transfer the export dump (25 GB) to a bridge server.

On the bridge server I zip the dump file and then transfer the zip file to the backup server, where the system support group copies it to tape. This whole activity takes a lot of time, so I planned to zip the export dump file on the database server itself. I mailed the concerned person to allocate more space to the backup mount point, but the system support group could not allocate it due to insufficient disk space. At that time my lead told me, "Raja, use mknod." I was not aware of mknod.

Now let us see how to take the export backup using mknod.

File Name: exppipe.sh

#!/bin/sh
. $HOME/.bash_profile
cd /home/oracle/dbatest/raja/mknode/
mknod exp_pipe p
gzip -cNf < exp_pipe > exp_data.dmp.gz &
exp demo/demo file=exp_pipe log=exp_data.log owner=demo statistics=none
rm -f exp_pipe
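
Since the whole trick hinges on exp_pipe being a FIFO rather than a regular file, a quick sanity check can help; this is an illustrative sketch (the temporary path is mine, not from the article):

```shell
#!/bin/sh
# Illustrative check: mknod with type 'p' creates a FIFO (a named pipe).
# The first character of the ls mode string is 'p' for a pipe.
dir=$(mktemp -d)           # temporary directory (assumption, not from the article)
mknod "$dir/check_pipe" p  # same call the export script uses
ls -l "$dir/check_pipe"    # mode looks like: prw-r--r-- ...
rm -rf "$dir"              # clean up
```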

File Name: imppipe.sh

#!/bin/sh
. $HOME/.bash_profile
cd /home/oracle/dbatest/raja/mknode/
mknod import_pipe p
gunzip -c exp_data.dmp.gz > import_pipe &
imp testpipe/testpipe file=import_pipe log=imp_data.log fromuser=demo touser=testpipe statistics=none commit=y
rm -f import_pipe

Note

. $HOME/.bash_profile sources the environment variable file so that the Oracle environment is set before exp/imp run.
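
The exact contents of that profile depend on the installation; as a purely hypothetical illustration (the paths and SID below are placeholders, not from the article), it might export something like:

```shell
# Hypothetical Oracle environment settings; adjust to your installation.
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1  # placeholder path
export ORACLE_SID=ORCL                                      # placeholder SID
export PATH=$ORACLE_HOME/bin:$PATH                          # so exp/imp are found
```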

Instead of doing the export and zip as two sequential steps with an interim dump file (or, on the other side, unzip and import), Unix can pipe the output of one program (such as exp) as input to another (such as gzip) while both run in parallel, without creating any interim file on disk.
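
The same parallel-pipe idea can be sketched with generic tools, no Oracle required; here printf stands in for exp as the writer (the file names are mine, for illustration only):

```shell
#!/bin/sh
# Minimal named-pipe sketch: the writer and gzip run in parallel,
# and no uncompressed interim file is ever created on disk.
dir=$(mktemp -d)
mknod "$dir/demo_pipe" p                       # create the FIFO
gzip -c < "$dir/demo_pipe" > "$dir/demo.gz" &  # reader: compress data as it arrives
printf 'hello pipe\n' > "$dir/demo_pipe"       # writer: plays the role of exp
wait                                           # let gzip drain the pipe and exit
gunzip -c "$dir/demo.gz"                       # prints: hello pipe
rm -rf "$dir"
```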

I hope this article helped you to export using mknod. Suggestions are welcome.
