The issue: some files on the local disk are breaking a PHP application that uses them (points to them). The file names need to change slightly because they do not comply with a naming convention. I could easily do this manually with mv or cp, but there are many files and issuing single commands would be too laborious.
Ok, first things first: I need to get the names of the files from the server and create a sandbox environment on my own Linux box. Once all testing is done I will run the script on the corporate machine.
Creating an environment by duplicating the original file system
I went to the location of the files and backed up the data to a gzipped tar archive. This way I can roll back quickly if needed.
tar -zcvf backup_name.tar.gz folder_to_be_backed_up/
I did an ls of the directory and pasted the results into a text file (theFile.txt) on my sandbox machine (copy and paste using gedit). Once I had a text file with the entire directory listing (one file name per line), I ran the following script to recreate the files as empty placeholders.
#!/bin/bash
while read -r filename; do
    touch "$filename"
done < theFile.txt
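A quick sanity check that the sandbox really matches the listing is to confirm every name in theFile.txt now exists on disk. A sketch (the two-line demo listing here is made up; in practice theFile.txt already exists from the paste step):

```shell
#!/bin/bash
# Demo listing; in real use theFile.txt comes from the pasted ls output.
printf 'a.mp3\nb.mp3\n' > theFile.txt
while read -r filename; do touch "$filename"; done < theFile.txt

# Check that every listed name exists as a file.
missing=0
while read -r filename; do
    [ -e "$filename" ] || { echo "missing: $filename"; missing=1; }
done < theFile.txt
[ "$missing" -eq 0 ] && echo "all files present"
```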
Ok, so on the sandbox machine I wrote the following shell script and tested it. It essentially does a string replace on the part of the file name I ask it to match, e.g. 's/needle/newNeedle/' changes a file called the_needle_001.mp3 to the_newNeedle_001.mp3.
#!/bin/bash
# Loop over the files whose names contain the string to be replaced.
for f in *Original_Name*; do
    new=$(echo "$f" | sed -e 's/Original_Name/ChangedName/')
    cp -R "$f" "$new"
done
This will create a new set of files with the new names and leave the old names in place.
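Once the copies have been checked, the same loop can use mv instead of cp to rename in a single pass, removing the old names as it goes. This mv variant is my sketch, not the script above; the demo/ directory and the_needle_*.mp3 files are made up for the demo:

```shell
#!/bin/bash
# Demo: rename the_needle_*.mp3 to the_newNeedle_*.mp3 using mv.
mkdir -p demo
touch demo/the_needle_001.mp3 demo/the_needle_002.mp3

for f in demo/*needle*; do
    new=$(echo "$f" | sed -e 's/needle/newNeedle/')
    mv "$f" "$new"
done

ls demo
```

Because mv replaces the file rather than copying it, run this only after the cp version has been verified in the sandbox.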