Job ID = 1295412
Downloading the sra file...
Read layout: SINGLE
Converting to fastq...
spots read      : 17,232,507
reads read      : 17,232,507
reads written   : 17,232,507
rm: cannot remove ‘[DSE]RR*’: No such file or directory
rm: cannot remove ‘fastqDump_tmp*’: No such file or directory
Converted to fastq.
Mapping with bowtie...
Time loading reference: 00:00:00
Time loading forward index: 00:00:00
Time loading mirror index: 00:00:00
Multiseed full-index search: 00:06:42
17232507 reads; of these:
  17232507 (100.00%) were unpaired; of these:
    207581 (1.20%) aligned 0 times
    12449031 (72.24%) aligned exactly 1 time
    4575895 (26.55%) aligned >1 times
98.80% overall alignment rate
Time searching: 00:06:42
Overall time: 00:06:42
Mapping completed.
Converting to BAM with samtools...
[samopen] SAM header is present: 15 sequences.
[bam_sort_core] merging from 8 files...
[bam_rmdupse_core] 3297504 / 17024926 = 0.1937 in library ' '
Converted to BAM.
Creating Bed file...
Converting to BedGraph...
INFO @ Mon, 03 Jun 2019 13:51:28:
# Command line: callpeak -t /home/okishinya/chipatlas/results/dm3/SRX336286/SRX336286.bam -f BAM -g dm -n /home/okishinya/chipatlas/results/dm3/SRX336286/SRX336286.20 -q 1e-20
# ARGUMENTS LIST:
# name = /home/okishinya/chipatlas/results/dm3/SRX336286/SRX336286.20
# format = BAM
# ChIP-seq file = ['/home/okishinya/chipatlas/results/dm3/SRX336286/SRX336286.bam']
# control file = None
# effective genome size = 1.20e+08
# band width = 300
# model fold = [5, 50]
# qvalue cutoff = 1.00e-20
# Larger dataset will be scaled towards smaller dataset.
# Range for calculating regional lambda is: 10000 bps
# Broad region calling is off
# Paired-End mode is off

INFO @ Mon, 03 Jun 2019 13:51:28: #1 read tag files...
INFO @ Mon, 03 Jun 2019 13:51:28: #1 read treatment tags...
INFO @ Mon, 03 Jun 2019 13:51:29:
# Command line: callpeak -t /home/okishinya/chipatlas/results/dm3/SRX336286/SRX336286.bam -f BAM -g dm -n /home/okishinya/chipatlas/results/dm3/SRX336286/SRX336286.05 -q 1e-05
# ARGUMENTS LIST:
# name = /home/okishinya/chipatlas/results/dm3/SRX336286/SRX336286.05
# format = BAM
# ChIP-seq file = ['/home/okishinya/chipatlas/results/dm3/SRX336286/SRX336286.bam']
# control file = None
# effective genome size = 1.20e+08
# band width = 300
# model fold = [5, 50]
# qvalue cutoff = 1.00e-05
# Larger dataset will be scaled towards smaller dataset.
# Range for calculating regional lambda is: 10000 bps
# Broad region calling is off
# Paired-End mode is off

INFO @ Mon, 03 Jun 2019 13:51:29: #1 read tag files...
INFO @ Mon, 03 Jun 2019 13:51:29: #1 read treatment tags...
INFO @ Mon, 03 Jun 2019 13:51:29:
# Command line: callpeak -t /home/okishinya/chipatlas/results/dm3/SRX336286/SRX336286.bam -f BAM -g dm -n /home/okishinya/chipatlas/results/dm3/SRX336286/SRX336286.10 -q 1e-10
# ARGUMENTS LIST:
# name = /home/okishinya/chipatlas/results/dm3/SRX336286/SRX336286.10
# format = BAM
# ChIP-seq file = ['/home/okishinya/chipatlas/results/dm3/SRX336286/SRX336286.bam']
# control file = None
# effective genome size = 1.20e+08
# band width = 300
# model fold = [5, 50]
# qvalue cutoff = 1.00e-10
# Larger dataset will be scaled towards smaller dataset.
# Range for calculating regional lambda is: 10000 bps
# Broad region calling is off
# Paired-End mode is off

INFO @ Mon, 03 Jun 2019 13:51:29: #1 read tag files...
INFO @ Mon, 03 Jun 2019 13:51:29: #1 read treatment tags...
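[Note] The log above records only tool output, not the commands that produced it. A minimal shell sketch of the equivalent download / conversion / mapping / deduplication steps, assuming a run accession written here as SRR000000, a prebuilt dm3 bowtie2 index at genome/dm3, and a samtools version supporting sort -o (all of these are placeholders, not taken from the log; the "Multiseed full-index search" line suggests a bowtie2-style aligner):

  # Download the run from SRA and convert it to fastq (single-end layout)
  fastq-dump SRR000000

  # Map against dm3 and sort the alignments
  bowtie2 -p 8 -x genome/dm3 -U SRR000000.fastq -S SRX336286.sam
  samtools sort -@ 8 -o SRX336286.sorted.bam SRX336286.sam

  # Remove PCR duplicates for single-end reads
  # (the [bam_rmdupse_core] line above reports 3297504 / 17024926 = 19.37% duplicates removed)
  samtools rmdup -s SRX336286.sorted.bam SRX336286.bam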
INFO @ Mon, 03 Jun 2019 13:51:36: 1000000
INFO @ Mon, 03 Jun 2019 13:51:36: 1000000
INFO @ Mon, 03 Jun 2019 13:51:37: 1000000
INFO @ Mon, 03 Jun 2019 13:51:44: 2000000
INFO @ Mon, 03 Jun 2019 13:51:44: 2000000
INFO @ Mon, 03 Jun 2019 13:51:45: 2000000
INFO @ Mon, 03 Jun 2019 13:51:51: 3000000
INFO @ Mon, 03 Jun 2019 13:51:52: 3000000
INFO @ Mon, 03 Jun 2019 13:51:53: 3000000
INFO @ Mon, 03 Jun 2019 13:51:58: 4000000
INFO @ Mon, 03 Jun 2019 13:51:59: 4000000
INFO @ Mon, 03 Jun 2019 13:52:02: 4000000
INFO @ Mon, 03 Jun 2019 13:52:05: 5000000
INFO @ Mon, 03 Jun 2019 13:52:06: 5000000
INFO @ Mon, 03 Jun 2019 13:52:11: 5000000
INFO @ Mon, 03 Jun 2019 13:52:12: 6000000
INFO @ Mon, 03 Jun 2019 13:52:12: 6000000
INFO @ Mon, 03 Jun 2019 13:52:19: 6000000
INFO @ Mon, 03 Jun 2019 13:52:19: 7000000
INFO @ Mon, 03 Jun 2019 13:52:20: 7000000
INFO @ Mon, 03 Jun 2019 13:52:26: 8000000
INFO @ Mon, 03 Jun 2019 13:52:27: 8000000
INFO @ Mon, 03 Jun 2019 13:52:27: 7000000
INFO @ Mon, 03 Jun 2019 13:52:33: 9000000
INFO @ Mon, 03 Jun 2019 13:52:34: 9000000
INFO @ Mon, 03 Jun 2019 13:52:35: 8000000
INFO @ Mon, 03 Jun 2019 13:52:41: 10000000
INFO @ Mon, 03 Jun 2019 13:52:41: 10000000
INFO @ Mon, 03 Jun 2019 13:52:43: 9000000
INFO @ Mon, 03 Jun 2019 13:52:48: 11000000
INFO @ Mon, 03 Jun 2019 13:52:48: 11000000
INFO @ Mon, 03 Jun 2019 13:52:51: 10000000
INFO @ Mon, 03 Jun 2019 13:52:55: 12000000
INFO @ Mon, 03 Jun 2019 13:52:55: 12000000
INFO @ Mon, 03 Jun 2019 13:52:59: 11000000
INFO @ Mon, 03 Jun 2019 13:53:01: 13000000
INFO @ Mon, 03 Jun 2019 13:53:02: 13000000
INFO @ Mon, 03 Jun 2019 13:53:07: 12000000
INFO @ Mon, 03 Jun 2019 13:53:07: #1 tag size is determined as 44 bps
INFO @ Mon, 03 Jun 2019 13:53:07: #1 tag size = 44
INFO @ Mon, 03 Jun 2019 13:53:07: #1 total tags in treatment: 13727422
INFO @ Mon, 03 Jun 2019 13:53:07: #1 user defined the maximum tags...
INFO @ Mon, 03 Jun 2019 13:53:07: #1 filter out redundant tags at the same location and the same strand by allowing at most 1 tag(s)
INFO @ Mon, 03 Jun 2019 13:53:07: #1 tags after filtering in treatment: 13727422
INFO @ Mon, 03 Jun 2019 13:53:07: #1 Redundant rate of treatment: 0.00
INFO @ Mon, 03 Jun 2019 13:53:07: #1 finished!
INFO @ Mon, 03 Jun 2019 13:53:07: #2 Build Peak Model...
INFO @ Mon, 03 Jun 2019 13:53:07: #2 looking for paired plus/minus strand peaks...
INFO @ Mon, 03 Jun 2019 13:53:07: #1 tag size is determined as 44 bps
INFO @ Mon, 03 Jun 2019 13:53:07: #1 tag size = 44
INFO @ Mon, 03 Jun 2019 13:53:07: #1 total tags in treatment: 13727422
INFO @ Mon, 03 Jun 2019 13:53:07: #1 user defined the maximum tags...
INFO @ Mon, 03 Jun 2019 13:53:07: #1 filter out redundant tags at the same location and the same strand by allowing at most 1 tag(s)
INFO @ Mon, 03 Jun 2019 13:53:08: #1 tags after filtering in treatment: 13727422
INFO @ Mon, 03 Jun 2019 13:53:08: #1 Redundant rate of treatment: 0.00
INFO @ Mon, 03 Jun 2019 13:53:08: #1 finished!
INFO @ Mon, 03 Jun 2019 13:53:08: #2 Build Peak Model...
INFO @ Mon, 03 Jun 2019 13:53:08: #2 looking for paired plus/minus strand peaks...
INFO @ Mon, 03 Jun 2019 13:53:08: #2 number of paired peaks: 54
WARNING @ Mon, 03 Jun 2019 13:53:08: Too few paired peaks (54) so I can not build the model! Broader your MFOLD range parameter may erase this error. If it still can't build the model, we suggest to use --nomodel and --extsize 147 or other fixed number instead.
WARNING @ Mon, 03 Jun 2019 13:53:08: Process for pairing-model is terminated!
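[Note] MACS2 aborts model building here because only 54 paired plus/minus peaks were found. Following the warning's own suggestion, a re-run with a fixed fragment size would look roughly like the sketch below (the flags come from the warning and the command line logged above; the output name is reused only as an example, this is not what the job actually ran):

  # Skip the fragment-size model and extend reads to a fixed 147 bp instead
  macs2 callpeak -t /home/okishinya/chipatlas/results/dm3/SRX336286/SRX336286.bam \
    -f BAM -g dm -n /home/okishinya/chipatlas/results/dm3/SRX336286/SRX336286.05 \
    -q 1e-05 --nomodel --extsize 147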
cut: /home/okishinya/chipatlas/results/dm3/SRX336286/SRX336286.20_peaks.narrowPeak: No such file or directory
pass1 - making usageList (0 chroms): 2 millis
needLargeMem: trying to allocate 0 bytes (limit: 17179869184)
rm: cannot remove ‘/home/okishinya/chipatlas/results/dm3/SRX336286/SRX336286.20_model.r’: No such file or directory
rm: cannot remove ‘/home/okishinya/chipatlas/results/dm3/SRX336286/SRX336286.20_*.xls’: No such file or directory
rm: cannot remove ‘/home/okishinya/chipatlas/results/dm3/SRX336286/SRX336286.20_peaks.narrowPeak’: No such file or directory
CompletedMACS2peakCalling
INFO @ Mon, 03 Jun 2019 13:53:09: #2 number of paired peaks: 54
WARNING @ Mon, 03 Jun 2019 13:53:09: Too few paired peaks (54) so I can not build the model! Broader your MFOLD range parameter may erase this error. If it still can't build the model, we suggest to use --nomodel and --extsize 147 or other fixed number instead.
WARNING @ Mon, 03 Jun 2019 13:53:09: Process for pairing-model is terminated!
cut: /home/okishinya/chipatlas/results/dm3/SRX336286/SRX336286.10_peaks.narrowPeak: No such file or directory
pass1 - making usageList (0 chroms): 1 millis
needLargeMem: trying to allocate 0 bytes (limit: 17179869184)
rm: cannot remove ‘/home/okishinya/chipatlas/results/dm3/SRX336286/SRX336286.10_model.r’: No such file or directory
rm: cannot remove ‘/home/okishinya/chipatlas/results/dm3/SRX336286/SRX336286.10_*.xls’: No such file or directory
rm: cannot remove ‘/home/okishinya/chipatlas/results/dm3/SRX336286/SRX336286.10_peaks.narrowPeak’: No such file or directory
CompletedMACS2peakCalling
INFO @ Mon, 03 Jun 2019 13:53:14: 13000000
INFO @ Mon, 03 Jun 2019 13:53:20: #1 tag size is determined as 44 bps
INFO @ Mon, 03 Jun 2019 13:53:20: #1 tag size = 44
INFO @ Mon, 03 Jun 2019 13:53:20: #1 total tags in treatment: 13727422
INFO @ Mon, 03 Jun 2019 13:53:20: #1 user defined the maximum tags...
INFO @ Mon, 03 Jun 2019 13:53:20: #1 filter out redundant tags at the same location and the same strand by allowing at most 1 tag(s)
INFO @ Mon, 03 Jun 2019 13:53:21: #1 tags after filtering in treatment: 13727422
INFO @ Mon, 03 Jun 2019 13:53:21: #1 Redundant rate of treatment: 0.00
INFO @ Mon, 03 Jun 2019 13:53:21: #1 finished!
INFO @ Mon, 03 Jun 2019 13:53:21: #2 Build Peak Model...
INFO @ Mon, 03 Jun 2019 13:53:21: #2 looking for paired plus/minus strand peaks...
INFO @ Mon, 03 Jun 2019 13:53:22: #2 number of paired peaks: 54
WARNING @ Mon, 03 Jun 2019 13:53:22: Too few paired peaks (54) so I can not build the model! Broader your MFOLD range parameter may erase this error. If it still can't build the model, we suggest to use --nomodel and --extsize 147 or other fixed number instead.
WARNING @ Mon, 03 Jun 2019 13:53:22: Process for pairing-model is terminated!
cut: /home/okishinya/chipatlas/results/dm3/SRX336286/SRX336286.05_peaks.narrowPeak: No such file or directory
pass1 - making usageList (0 chroms): 2 millis
needLargeMem: trying to allocate 0 bytes (limit: 17179869184)
rm: cannot remove ‘/home/okishinya/chipatlas/results/dm3/SRX336286/SRX336286.05_model.r’: No such file or directory
rm: cannot remove ‘/home/okishinya/chipatlas/results/dm3/SRX336286/SRX336286.05_*.xls’: No such file or directory
rm: cannot remove ‘/home/okishinya/chipatlas/results/dm3/SRX336286/SRX336286.05_peaks.narrowPeak’: No such file or directory
CompletedMACS2peakCalling
Converted to BedGraph.
Converting to BigWig...
Converted to BigWig.
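[Note] The "pass1 - making usageList (0 chroms)" and "needLargeMem: trying to allocate 0 bytes" messages are typical of a UCSC converter (bedToBigBed or similar) being handed an empty or missing peak file, which is expected here since no narrowPeak files were produced. For a run that does yield peaks, the final conversions usually take roughly the form below (file names and the dm3.chrom.sizes path are placeholders, not from the log, and both inputs must be coordinate-sorted):

  # Peak intervals to BigBed; the cut call in the log suggests only the first columns are kept
  cut -f 1-3 SRX336286.05_peaks.narrowPeak | sort -k1,1 -k2,2n > SRX336286.05.bed
  bedToBigBed SRX336286.05.bed dm3.chrom.sizes SRX336286.05.bb

  # Coverage track: BedGraph to BigWig
  bedGraphToBigWig SRX336286.bedGraph dm3.chrom.sizes SRX336286.bw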