I have a shell script which:
- shuffles a large text file (6 million rows, 6 columns)
- sorts the file based on the first column
- outputs 1000 files
So the pseudocode looks like this:
file1.sh
#!/bin/bash
for i in $(seq 1 1000)
do
    # generate random numbers here, sort them, and output to file$i.txt
done
Is there a way to run this shell script in parallel
to make full use of multi-core CPUs?
At the moment, ./file1.sh
executes the 1000 runs sequentially, and it is very slow.
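For illustration, here is a minimal sketch of one common approach: run each iteration as a background job and throttle the number of concurrent jobs to the number of CPU cores using bash job control. The input file name input.txt and the shuf | sort pipeline are placeholders standing in for the real per-iteration work, nproc and shuf come from GNU coreutils, and wait -n needs bash 4.3 or newer:

#!/bin/bash
# Sketch: run the loop bodies as background jobs, at most one per CPU core.
max_jobs=$(nproc)                  # number of available CPU cores
for i in $(seq 1 1000)
do
    (
        # Placeholder for the real work: shuffle, sort on column 1, write file$i.txt.
        shuf input.txt | sort -k1,1 > "file$i.txt"
    ) &
    # Throttle: whenever the number of running jobs reaches the core count,
    # wait for any one of them to finish before launching the next.
    while [ "$(jobs -rp | wc -l)" -ge "$max_jobs" ]; do
        wait -n
    done
done
wait    # wait for the remaining background jobs to finish

Tools such as xargs -P or GNU parallel can do the same throttling with less boilerplate, if they are available on your system.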
Thanks for your help.