INFO: task bonnie++:31785 blocked for more than 120 seconds
When running heavy workloads on UEK kernels on systems with a lot of memory, you might see errors like the following in /var/log/messages:

INFO: task bonnie++:31785 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
bonnie++      D ffff810009004420     0 31785  11051 11096 (NOTLB)
 ffff81021c771aa8 0000000000000082 ffff81103e62ccc0 ffffffff88031cb3
 ffff810ac94cd6c0 0000000000000007 ffff810220347820 ffffffff80310b60
 00016803dfd77991 00000000001312ee ffff810220347a08 0000000000000001
Call Trace:
 [<ffffffff88031cb3>] :jbd:do_get_write_access+0x4f9/0x530
 [<ffffffff800ce675>] zone_statistics+0x3e/0x6d
 [<ffffffff88032002>] :jbd:start_this_handle+0x2e5/0x36c
 [<ffffffff800a28b4>] autoremove_wake_function+0x0/0x2e
 [<ffffffff88032152>] :jbd:journal_start+0xc9/0x100
 [<ffffffff88050362>] :ext3:ext3_write_begin+0x9a/0x1cc
 [<ffffffff8000fda3>] generic_file_buffered_write+0x14b/0x675
 [<ffffffff80016679>] __generic_file_aio_write_nolock+0x369/0x3b6
 [<ffffffff80021850>] generic_file_aio_write+0x65/0xc1
 [<ffffffff8804c1b6>] :ext3:ext3_file_write+0x16/0x91
 [<ffffffff800182df>] do_sync_write+0xc7/0x104
 [<ffffffff800a28b4>] autoremove_wake_function+0x0/0x2e
 [<ffffffff80062ff0>] thread_return+0x62/0xfe
 [<ffffffff80016a81>] vfs_write+0xce/0x174
 [<ffffffff80017339>] sys_write+0x45/0x6e
 [<ffffffff8005d28d>] tracesys+0xd5/0xe0

This is a known bug. By default, Linux uses up to 40% of the available memory for file system caching. Once that mark has been reached, the file system flushes all outstanding data to disk, causing all following IOs to go synchronous. Flushing that data out to disk is subject to a time limit of 120 seconds by default, and in this case the IO subsystem is simply not fast enough to flush the data within 120 seconds. This happens especially on systems with a lot of memory.

The problem is solved in later kernels, and there is no fix from Oracle. I fixed it by lowering the mark for flushing the cache from 40% to 10%, setting "vm.dirty_ratio=10" in /etc/sysctl.conf. This setting does not influence overall database performance, since you hopefully use Direct IO and bypass the file system cache completely.
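A minimal sketch of applying the change described above, both for the running kernel and persistently (run as root; the last command, which silences the hung-task warning itself, comes from the log message and only hides the symptom rather than fixing it):

```shell
# Lower the dirty-page threshold for the running kernel (takes effect immediately)
sysctl -w vm.dirty_ratio=10

# Persist the setting across reboots
echo 'vm.dirty_ratio=10' >> /etc/sysctl.conf

# Verify the value the kernel is actually using
cat /proc/sys/vm/dirty_ratio

# Optional: disable the 120-second hung-task warning entirely
# (this suppresses the message; it does not make the IO any faster)
echo 0 > /proc/sys/kernel/hung_task_timeout_secs
```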