Job ID = 1294542
Downloading SRA file...
Read layout: SINGLE
Converting to fastq...
2019-06-02T23:48:01 fasterq-dump.2.9.6 sys: timeout exhausted while reading file within network system module - mbedtls_ssl_read returned -76 ( NET - Reading information from the socket failed )
2019-06-02T23:48:01 fasterq-dump.2.9.6 sys: timeout exhausted while reading file within network system module - mbedtls_ssl_read returned -76 ( NET - Reading information from the socket failed )
2019-06-02T23:48:01 fasterq-dump.2.9.6 sys: timeout exhausted while reading file within network system module - mbedtls_ssl_read returned -76 ( NET - Reading information from the socket failed )
2019-06-02T23:48:01 fasterq-dump.2.9.6 sys: timeout exhausted while reading file within network system module - mbedtls_ssl_read returned -76 ( NET - Reading information from the socket failed )
spots read      : 18,493,128
reads read      : 18,493,128
reads written   : 18,493,128
rm: cannot remove ‘[DSE]RR*’: No such file or directory
rm: cannot remove ‘fastqDump_tmp*’: No such file or directory
Converted to fastq.
Mapping with bowtie...
Time loading reference: 00:00:00
Time loading forward index: 00:00:00
Time loading mirror index: 00:00:00
Multiseed full-index search: 00:07:08
18493128 reads; of these:
  18493128 (100.00%) were unpaired; of these:
    1125010 (6.08%) aligned 0 times
    13121052 (70.95%) aligned exactly 1 time
    4247066 (22.97%) aligned >1 times
93.92% overall alignment rate
Time searching: 00:07:08
Overall time: 00:07:08
Mapping complete.
Converting to BAM with samtools...
[samopen] SAM header is present: 15 sequences.
[bam_sort_core] merging from 8 files...
[bam_rmdupse_core] 1482230 / 17368118 = 0.0853 in library ' '
Converted to BAM.
Creating Bed file...
Converting to BedGraph...
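As a quick cross-check of the bowtie and samtools figures above (a standalone sketch, not part of the pipeline itself), the reported percentages follow directly from the raw counts in the log:

```python
# Sanity-check the bowtie alignment summary printed above.
# All figures are taken directly from the log; nothing here is new data.
total = 18_493_128      # "18493128 reads"
unaligned = 1_125_010   # "aligned 0 times"
unique = 13_121_052     # "aligned exactly 1 time"
multi = 4_247_066       # "aligned >1 times"

assert unaligned + unique + multi == total
overall_rate = 100 * (unique + multi) / total
print(f"{overall_rate:.2f}% overall alignment rate")  # matches the reported 93.92%

# samtools rmdup reports 1482230 / 17368118 = 0.0853; note that 17,368,118
# equals total - unaligned, i.e. the aligned reads that reach the BAM.
dup_rate = 1_482_230 / 17_368_118
print(f"duplicate rate = {dup_rate:.4f}")  # matches the reported 0.0853
```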
INFO @ Mon, 03 Jun 2019 09:07:31: # Command line: callpeak -t /home/okishinya/chipatlas/results/dm3/SRX287729/SRX287729.bam -f BAM -g dm -n /home/okishinya/chipatlas/results/dm3/SRX287729/SRX287729.10 -q 1e-10
# ARGUMENTS LIST:
# name = /home/okishinya/chipatlas/results/dm3/SRX287729/SRX287729.10
# format = BAM
# ChIP-seq file = ['/home/okishinya/chipatlas/results/dm3/SRX287729/SRX287729.bam']
# control file = None
# effective genome size = 1.20e+08
# band width = 300
# model fold = [5, 50]
# qvalue cutoff = 1.00e-10
# Larger dataset will be scaled towards smaller dataset.
# Range for calculating regional lambda is: 10000 bps
# Broad region calling is off
# Paired-End mode is off
INFO @ Mon, 03 Jun 2019 09:07:31: #1 read tag files...
INFO @ Mon, 03 Jun 2019 09:07:31: #1 read treatment tags...

INFO @ Mon, 03 Jun 2019 09:07:31: # Command line: callpeak -t /home/okishinya/chipatlas/results/dm3/SRX287729/SRX287729.bam -f BAM -g dm -n /home/okishinya/chipatlas/results/dm3/SRX287729/SRX287729.20 -q 1e-20
# ARGUMENTS LIST:
# name = /home/okishinya/chipatlas/results/dm3/SRX287729/SRX287729.20
# format = BAM
# ChIP-seq file = ['/home/okishinya/chipatlas/results/dm3/SRX287729/SRX287729.bam']
# control file = None
# effective genome size = 1.20e+08
# band width = 300
# model fold = [5, 50]
# qvalue cutoff = 1.00e-20
# Larger dataset will be scaled towards smaller dataset.
# Range for calculating regional lambda is: 10000 bps
# Broad region calling is off
# Paired-End mode is off
INFO @ Mon, 03 Jun 2019 09:07:31: #1 read tag files...
INFO @ Mon, 03 Jun 2019 09:07:31: #1 read treatment tags...
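The parallel callpeak invocations in this log differ only in their `-q` cutoff; the output-name suffixes (`.05`, `.10`, `.20`) appear to be the zero-padded −log10 of each cutoff. A small sketch of that naming convention (an inference from this log, not documented MACS2 behaviour):

```python
from math import log10

# Map each -q cutoff used in this log to the suffix seen in the output names.
# The zero-padded -log10 encoding is inferred from the log, not from MACS2 docs.
for q in (1e-05, 1e-10, 1e-20):
    suffix = f"{round(-log10(q)):02d}"
    print(f"-q {q:g}  ->  SRX287729.{suffix}")
```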
INFO @ Mon, 03 Jun 2019 09:07:32: # Command line: callpeak -t /home/okishinya/chipatlas/results/dm3/SRX287729/SRX287729.bam -f BAM -g dm -n /home/okishinya/chipatlas/results/dm3/SRX287729/SRX287729.05 -q 1e-05
# ARGUMENTS LIST:
# name = /home/okishinya/chipatlas/results/dm3/SRX287729/SRX287729.05
# format = BAM
# ChIP-seq file = ['/home/okishinya/chipatlas/results/dm3/SRX287729/SRX287729.bam']
# control file = None
# effective genome size = 1.20e+08
# band width = 300
# model fold = [5, 50]
# qvalue cutoff = 1.00e-05
# Larger dataset will be scaled towards smaller dataset.
# Range for calculating regional lambda is: 10000 bps
# Broad region calling is off
# Paired-End mode is off
INFO @ Mon, 03 Jun 2019 09:07:32: #1 read tag files...
INFO @ Mon, 03 Jun 2019 09:07:32: #1 read treatment tags...
INFO @ Mon, 03 Jun 2019 09:07:41: 1000000
INFO @ Mon, 03 Jun 2019 09:07:42: 1000000
INFO @ Mon, 03 Jun 2019 09:07:44: 1000000
INFO @ Mon, 03 Jun 2019 09:07:52: 2000000
INFO @ Mon, 03 Jun 2019 09:07:53: 2000000
INFO @ Mon, 03 Jun 2019 09:07:57: 2000000
INFO @ Mon, 03 Jun 2019 09:08:01: 3000000
INFO @ Mon, 03 Jun 2019 09:08:03: 3000000
INFO @ Mon, 03 Jun 2019 09:08:09: 3000000
INFO @ Mon, 03 Jun 2019 09:08:11: 4000000
INFO @ Mon, 03 Jun 2019 09:08:13: 4000000
INFO @ Mon, 03 Jun 2019 09:08:20: 5000000
INFO @ Mon, 03 Jun 2019 09:08:22: 4000000
INFO @ Mon, 03 Jun 2019 09:08:23: 5000000
INFO @ Mon, 03 Jun 2019 09:08:30: 6000000
INFO @ Mon, 03 Jun 2019 09:08:33: 6000000
INFO @ Mon, 03 Jun 2019 09:08:33: 5000000
INFO @ Mon, 03 Jun 2019 09:08:40: 7000000
INFO @ Mon, 03 Jun 2019 09:08:44: 7000000
INFO @ Mon, 03 Jun 2019 09:08:45: 6000000
INFO @ Mon, 03 Jun 2019 09:08:51: 8000000
INFO @ Mon, 03 Jun 2019 09:08:55: 8000000
INFO @ Mon, 03 Jun 2019 09:08:57: 7000000
INFO @ Mon, 03 Jun 2019 09:09:02: 9000000
INFO @ Mon, 03 Jun 2019 09:09:05: 9000000
INFO @ Mon, 03 Jun 2019 09:09:08: 8000000
INFO @ Mon, 03 Jun 2019 09:09:12: 10000000
INFO @ Mon, 03 Jun 2019 09:09:15: 10000000
INFO @ Mon, 03 Jun 2019 09:09:20: 9000000
INFO @ Mon, 03 Jun 2019 09:09:22: 11000000
INFO @ Mon, 03 Jun 2019 09:09:25: 11000000
INFO @ Mon, 03 Jun 2019 09:09:31: 10000000
INFO @ Mon, 03 Jun 2019 09:09:32: 12000000
INFO @ Mon, 03 Jun 2019 09:09:35: 12000000
INFO @ Mon, 03 Jun 2019 09:09:42: 11000000
INFO @ Mon, 03 Jun 2019 09:09:42: 13000000
INFO @ Mon, 03 Jun 2019 09:09:46: 13000000
INFO @ Mon, 03 Jun 2019 09:09:53: 12000000
INFO @ Mon, 03 Jun 2019 09:09:54: 14000000
INFO @ Mon, 03 Jun 2019 09:09:57: 14000000
INFO @ Mon, 03 Jun 2019 09:10:04: 13000000
INFO @ Mon, 03 Jun 2019 09:10:04: 15000000
INFO @ Mon, 03 Jun 2019 09:10:07: 15000000
INFO @ Mon, 03 Jun 2019 09:10:13: #1 tag size is determined as 50 bps
INFO @ Mon, 03 Jun 2019 09:10:13: #1 tag size = 50
INFO @ Mon, 03 Jun 2019 09:10:13: #1 total tags in treatment: 15885888
INFO @ Mon, 03 Jun 2019 09:10:13: #1 user defined the maximum tags...
INFO @ Mon, 03 Jun 2019 09:10:13: #1 filter out redundant tags at the same location and the same strand by allowing at most 1 tag(s)
INFO @ Mon, 03 Jun 2019 09:10:13: #1 tags after filtering in treatment: 15885888
INFO @ Mon, 03 Jun 2019 09:10:13: #1 Redundant rate of treatment: 0.00
INFO @ Mon, 03 Jun 2019 09:10:13: #1 finished!
INFO @ Mon, 03 Jun 2019 09:10:13: #2 Build Peak Model...
INFO @ Mon, 03 Jun 2019 09:10:13: #2 looking for paired plus/minus strand peaks...
INFO @ Mon, 03 Jun 2019 09:10:15: #2 number of paired peaks: 156
WARNING @ Mon, 03 Jun 2019 09:10:15: Fewer paired peaks (156) than 1000! Model may not be build well! Lower your MFOLD parameter may erase this warning. Now I will use 156 pairs to build model!
INFO @ Mon, 03 Jun 2019 09:10:15: start model_add_line...
INFO @ Mon, 03 Jun 2019 09:10:15: start X-correlation...
INFO @ Mon, 03 Jun 2019 09:10:15: end of X-cor
INFO @ Mon, 03 Jun 2019 09:10:15: #2 finished!
INFO @ Mon, 03 Jun 2019 09:10:15: #2 predicted fragment length is 43 bps
INFO @ Mon, 03 Jun 2019 09:10:15: #2 alternative fragment length(s) may be 43 bps
INFO @ Mon, 03 Jun 2019 09:10:15: #2.2 Generate R script for model : /home/okishinya/chipatlas/results/dm3/SRX287729/SRX287729.10_model.r
WARNING @ Mon, 03 Jun 2019 09:10:15: #2 Since the d (43) calculated from paired-peaks are smaller than 2*tag length, it may be influenced by unknown sequencing problem!
WARNING @ Mon, 03 Jun 2019 09:10:15: #2 You may need to consider one of the other alternative d(s): 43
WARNING @ Mon, 03 Jun 2019 09:10:15: #2 You can restart the process with --nomodel --extsize XXX with your choice or an arbitrary number. Nontheless, MACS will continute computing.
INFO @ Mon, 03 Jun 2019 09:10:15: #3 Call peaks...
INFO @ Mon, 03 Jun 2019 09:10:15: #3 Pre-compute pvalue-qvalue table...
INFO @ Mon, 03 Jun 2019 09:10:15: 14000000
INFO @ Mon, 03 Jun 2019 09:10:16: #1 tag size is determined as 50 bps
INFO @ Mon, 03 Jun 2019 09:10:16: #1 tag size = 50
INFO @ Mon, 03 Jun 2019 09:10:16: #1 total tags in treatment: 15885888
INFO @ Mon, 03 Jun 2019 09:10:16: #1 user defined the maximum tags...
INFO @ Mon, 03 Jun 2019 09:10:16: #1 filter out redundant tags at the same location and the same strand by allowing at most 1 tag(s)
INFO @ Mon, 03 Jun 2019 09:10:16: #1 tags after filtering in treatment: 15885888
INFO @ Mon, 03 Jun 2019 09:10:16: #1 Redundant rate of treatment: 0.00
INFO @ Mon, 03 Jun 2019 09:10:16: #1 finished!
INFO @ Mon, 03 Jun 2019 09:10:16: #2 Build Peak Model...
INFO @ Mon, 03 Jun 2019 09:10:16: #2 looking for paired plus/minus strand peaks...
INFO @ Mon, 03 Jun 2019 09:10:18: #2 number of paired peaks: 156
WARNING @ Mon, 03 Jun 2019 09:10:18: Fewer paired peaks (156) than 1000! Model may not be build well! Lower your MFOLD parameter may erase this warning. Now I will use 156 pairs to build model!
INFO @ Mon, 03 Jun 2019 09:10:18: start model_add_line...
INFO @ Mon, 03 Jun 2019 09:10:18: start X-correlation...
INFO @ Mon, 03 Jun 2019 09:10:18: end of X-cor
INFO @ Mon, 03 Jun 2019 09:10:18: #2 finished!
INFO @ Mon, 03 Jun 2019 09:10:18: #2 predicted fragment length is 43 bps
INFO @ Mon, 03 Jun 2019 09:10:18: #2 alternative fragment length(s) may be 43 bps
INFO @ Mon, 03 Jun 2019 09:10:18: #2.2 Generate R script for model : /home/okishinya/chipatlas/results/dm3/SRX287729/SRX287729.20_model.r
WARNING @ Mon, 03 Jun 2019 09:10:18: #2 Since the d (43) calculated from paired-peaks are smaller than 2*tag length, it may be influenced by unknown sequencing problem!
WARNING @ Mon, 03 Jun 2019 09:10:18: #2 You may need to consider one of the other alternative d(s): 43
WARNING @ Mon, 03 Jun 2019 09:10:18: #2 You can restart the process with --nomodel --extsize XXX with your choice or an arbitrary number. Nontheless, MACS will continute computing.
INFO @ Mon, 03 Jun 2019 09:10:18: #3 Call peaks...
INFO @ Mon, 03 Jun 2019 09:10:18: #3 Pre-compute pvalue-qvalue table...
INFO @ Mon, 03 Jun 2019 09:10:26: 15000000
INFO @ Mon, 03 Jun 2019 09:10:35: #1 tag size is determined as 50 bps
INFO @ Mon, 03 Jun 2019 09:10:35: #1 tag size = 50
INFO @ Mon, 03 Jun 2019 09:10:35: #1 total tags in treatment: 15885888
INFO @ Mon, 03 Jun 2019 09:10:35: #1 user defined the maximum tags...
INFO @ Mon, 03 Jun 2019 09:10:35: #1 filter out redundant tags at the same location and the same strand by allowing at most 1 tag(s)
INFO @ Mon, 03 Jun 2019 09:10:36: #1 tags after filtering in treatment: 15885888
INFO @ Mon, 03 Jun 2019 09:10:36: #1 Redundant rate of treatment: 0.00
INFO @ Mon, 03 Jun 2019 09:10:36: #1 finished!
INFO @ Mon, 03 Jun 2019 09:10:36: #2 Build Peak Model...
INFO @ Mon, 03 Jun 2019 09:10:36: #2 looking for paired plus/minus strand peaks...
INFO @ Mon, 03 Jun 2019 09:10:37: #2 number of paired peaks: 156
WARNING @ Mon, 03 Jun 2019 09:10:37: Fewer paired peaks (156) than 1000! Model may not be build well! Lower your MFOLD parameter may erase this warning. Now I will use 156 pairs to build model!
INFO @ Mon, 03 Jun 2019 09:10:37: start model_add_line...
INFO @ Mon, 03 Jun 2019 09:10:37: start X-correlation...
INFO @ Mon, 03 Jun 2019 09:10:37: end of X-cor
INFO @ Mon, 03 Jun 2019 09:10:37: #2 finished!
INFO @ Mon, 03 Jun 2019 09:10:37: #2 predicted fragment length is 43 bps
INFO @ Mon, 03 Jun 2019 09:10:37: #2 alternative fragment length(s) may be 43 bps
INFO @ Mon, 03 Jun 2019 09:10:37: #2.2 Generate R script for model : /home/okishinya/chipatlas/results/dm3/SRX287729/SRX287729.05_model.r
WARNING @ Mon, 03 Jun 2019 09:10:37: #2 Since the d (43) calculated from paired-peaks are smaller than 2*tag length, it may be influenced by unknown sequencing problem!
WARNING @ Mon, 03 Jun 2019 09:10:37: #2 You may need to consider one of the other alternative d(s): 43
WARNING @ Mon, 03 Jun 2019 09:10:37: #2 You can restart the process with --nomodel --extsize XXX with your choice or an arbitrary number. Nontheless, MACS will continute computing.
INFO @ Mon, 03 Jun 2019 09:10:37: #3 Call peaks...
INFO @ Mon, 03 Jun 2019 09:10:37: #3 Pre-compute pvalue-qvalue table...
INFO @ Mon, 03 Jun 2019 09:10:56: #3 Call peaks for each chromosome...
INFO @ Mon, 03 Jun 2019 09:10:59: #3 Call peaks for each chromosome...
INFO @ Mon, 03 Jun 2019 09:11:16: #4 Write output xls file... /home/okishinya/chipatlas/results/dm3/SRX287729/SRX287729.10_peaks.xls
INFO @ Mon, 03 Jun 2019 09:11:16: #4 Write peak in narrowPeak format file... /home/okishinya/chipatlas/results/dm3/SRX287729/SRX287729.10_peaks.narrowPeak
INFO @ Mon, 03 Jun 2019 09:11:16: #4 Write summits bed file... /home/okishinya/chipatlas/results/dm3/SRX287729/SRX287729.10_summits.bed
INFO @ Mon, 03 Jun 2019 09:11:16: Done!
pass1 - making usageList (12 chroms): 1 millis
pass2 - checking and writing primary data (1614 records, 4 fields): 6 millis
CompletedMACS2peakCalling
INFO @ Mon, 03 Jun 2019 09:11:19: #3 Call peaks for each chromosome...
INFO @ Mon, 03 Jun 2019 09:11:19: #4 Write output xls file... /home/okishinya/chipatlas/results/dm3/SRX287729/SRX287729.20_peaks.xls
INFO @ Mon, 03 Jun 2019 09:11:19: #4 Write peak in narrowPeak format file... /home/okishinya/chipatlas/results/dm3/SRX287729/SRX287729.20_peaks.narrowPeak
INFO @ Mon, 03 Jun 2019 09:11:19: #4 Write summits bed file... /home/okishinya/chipatlas/results/dm3/SRX287729/SRX287729.20_summits.bed
INFO @ Mon, 03 Jun 2019 09:11:19: Done!
pass1 - making usageList (12 chroms): 1 millis
pass2 - checking and writing primary data (1055 records, 4 fields): 5 millis
CompletedMACS2peakCalling
INFO @ Mon, 03 Jun 2019 09:11:40: #4 Write output xls file... /home/okishinya/chipatlas/results/dm3/SRX287729/SRX287729.05_peaks.xls
INFO @ Mon, 03 Jun 2019 09:11:40: #4 Write peak in narrowPeak format file... /home/okishinya/chipatlas/results/dm3/SRX287729/SRX287729.05_peaks.narrowPeak
INFO @ Mon, 03 Jun 2019 09:11:40: #4 Write summits bed file... /home/okishinya/chipatlas/results/dm3/SRX287729/SRX287729.05_summits.bed
INFO @ Mon, 03 Jun 2019 09:11:40: Done!
pass1 - making usageList (14 chroms): 2 millis
pass2 - checking and writing primary data (1983 records, 4 fields): 5 millis
CompletedMACS2peakCalling
Converted to BedGraph.
Converting to BigWig...
Converted to BigWig.
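The WARNING repeated for each of the three runs fires because the modelled fragment length d (43 bp) is shorter than twice the 50 bp tag length, which MACS2 treats as a sign the shifting model may be unreliable. A minimal restatement of that condition (a sketch using only the figures from this log):

```python
# Restate the check behind "Since the d (43) ... are smaller than 2*tag length".
tag_length = 50   # "#1 tag size is determined as 50 bps"
d = 43            # "#2 predicted fragment length is 43 bps"

if d < 2 * tag_length:
    # This is the situation the log warns about; the suggested remedy in the
    # log is to rerun with --nomodel --extsize and an explicit fragment length.
    print(f"d ({d}) < 2 * tag length ({2 * tag_length}): model may be unreliable")
```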