# copying folders in terminal



## supanatral (Jun 10, 2008)

I'm trying to copy a folder in Terminal with all of its contents, but when I use the -r option it just copies the files within the folder to the destination. It doesn't create a new folder at the destination; it just copies the individual files there.

Also, some of these files may be corrupt. Is there a way to be notified if a file is corrupt?


----------



## ElDiabloConCaca (Jun 10, 2008)

You should do this:


```
cp -r [source folder] [destination directory]/[foldername]/
```

Notice that you need to put the foldername in the destination as well.  If the folder does not exist, it will be created.  Also note the trailing slash.
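A quick illustration of that point (the paths and folder names here are made up for the demo):

```shell
# Set up a throwaway source folder to copy.
rm -rf /tmp/cp_demo
mkdir -p /tmp/cp_demo/source_folder
echo "data" > /tmp/cp_demo/source_folder/file.txt

# Name the folder in the destination; "backup" does not
# exist yet, so cp creates it and copies the contents in.
cp -r /tmp/cp_demo/source_folder /tmp/cp_demo/backup

ls /tmp/cp_demo/backup
```

If you omit the folder name and point at an existing directory instead, the source folder's files land directly inside it, which is the behavior described in the original question.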

The 'cp' command does not test for "corrupt" files, since files are nothing more than binary bits... how would a copy operation detect that a bit is supposed to be a 1 instead of a 0?  A "corrupt" file could mean anything -- a Photoshop file with a 1px black line through it may be "corrupt" to you, but it's just a bunch of bits to the computer, and the computer doesn't know that line isn't supposed to be there.


----------



## supanatral (Jun 10, 2008)

Ok, well that makes sense, very good point.  Thanks for your help


----------



## supanatral (Jun 10, 2008)

One last question about copying files. If I login to the mac remotely using SSH and start copying files, does it continue to copy after I log out of SSH?


----------



## ElDiabloConCaca (Jun 10, 2008)

Nope, unless you launch the process as a background process.


----------



## supanatral (Jun 10, 2008)

First of all I need to say: man, I love this site! I get much more qualified people answering my questions than even AppleCare, and I get an answer sooner than AppleCare because of their usual wait time.

How can I make it a background process?


----------



## wraith (Jun 10, 2008)

You can put a "&" at the end of the command, e.g.:

```
cp * /stuff &
```

To bring it back to the foreground, type "fg".


----------



## Viro (Jun 10, 2008)

wraith said:


> You can put a "&" at the end of the command.
> 
> ie:  cp * /stuff &
> 
> to bring it back to the foreground, type "fg"



As far as I know, that will not work if you log out of SSH before the copy is complete.


----------



## ElDiabloConCaca (Jun 10, 2008)

Viro said:


> As far as I know, that will not work if you log out of SSH before the copy is complete.



The background copy will work and will continue to completion (or error), but the "fg" command will not, since your new SSH session has no background processes.  In essence, you cannot "log back into" an SSH session you've already logged out of.  If you SSH back into the machine after doing both the background copy and log out, then you're pretty much relegated to simply waiting until the copy finishes, or killing it with a kill command.

You can use the UNIX/Linux utility "Screen" to get around this.  A little complicated to learn, but well worth it if you intend on performing actions like this often.


----------



## Viro (Jun 10, 2008)

Is that really true? I remember running some build scripts through SSH and appending & to the build command, hoping it would keep running in the background after I logged out. Imagine my disappointment when I logged in the very next day to see that the build had stopped where I logged out.


----------



## ElDiabloConCaca (Jun 10, 2008)

Strange... we just tried it here with a bash shell script that looks like this:

```
#!/bin/sh
sleep 15
/usr/bin/dostuff.php
```
...which calls the program "dostuff.php", which is NOT backgrounded (all it does is send me a simple email).  We then executed the script remotely via an SSH session, logged out within the 15-second window the script gave us, then monitored the process from a different machine.  The script ran to completion, and I got the email.


----------



## ElDiabloConCaca (Jun 10, 2008)

Just thought of a situation where logging out would kill a background process, and that's precisely what you're describing, Viro: if you have a process that reads from stdin or writes to stdout, then you MUST redirect stdin and stdout somewhere else (files, maybe), because when you log out, the stdin and stdout pipes are closed, and the program fails.

Since compilers like cc and gcc write to stdout and stderr by default, when you log out, the next write to stdout or stderr would fail and bring the script to a screeching halt.

If you re-route stdin and stdout (and, possibly, stderr) early in the script, backgrounding your compile script and then logging out should allow it to continue running.
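A minimal sketch of that redirection idea, using made-up /tmp paths:

```shell
# Set up a throwaway source folder to copy.
rm -rf /tmp/redir_demo
mkdir -p /tmp/redir_demo/src
echo "payload" > /tmp/redir_demo/src/file.txt

# Send stdout and stderr to a log file and detach stdin from the
# terminal, so the backgrounded copy holds no pipes tied to the
# SSH session that will be closed at logout.
cp -R /tmp/redir_demo/src /tmp/redir_demo/dst \
    > /tmp/redir_demo/copy.log 2>&1 < /dev/null &

wait  # in a real session you would just log out instead of waiting
ls /tmp/redir_demo/dst
```

Any error messages from cp end up in copy.log, which you can inspect the next time you log in.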


----------



## Viro (Jun 11, 2008)

Oh cool, that's probably it. I should have redirected output to a file somewhere. I never thought of that!


----------



## mvcube (Jun 11, 2008)

You have to use the "nohup" command to detach the cp process from the terminal. "man nohup" will tell you more.

"nohup" means "do not react to the HUP (hang-up) signal".

The output is redirected automatically to a file named "nohup.out".
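For example (the /tmp paths here are made up for the demo):

```shell
# Set up a throwaway source folder to copy.
rm -rf /tmp/nohup_demo
mkdir -p /tmp/nohup_demo/src
echo "payload" > /tmp/nohup_demo/src/file.txt

# nohup makes the copy ignore the HUP signal sent at logout;
# when stdout is a terminal, its output is appended to ./nohup.out.
nohup cp -R /tmp/nohup_demo/src /tmp/nohup_demo/dst &

wait  # in a real session you would log out here
ls /tmp/nohup_demo/dst
```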


----------



## Giaguara (Jun 11, 2008)

And you could use cpmac instead of cp to also copy the resource forks over.


----------



## firebird (Aug 22, 2012)

I like this forum!
I know this is an Apple forum, but the terminal language is so similar that I can use most of the things explained here!
Only the Apple-specific things won't work!
And once I have my Apple, I can use the Terminal like I did in Linux, because it's practically the same!


----------



## Satcomer (Aug 23, 2012)

firebird said:


> I like this forum!
> I know this is an Apple forum, but the terminal language is so similar that I can use most of the things explained here!
> Only the Apple-specific things won't work!
> And once I have my Apple, I can use the Terminal like I did in Linux, because it's practically the same!



Maybe you will like MacOSXHints.com. They have been posting good OS X hints for over ten years now, so you can find many tips that will blow your mind.


----------

