Production Issue Troubleshooting Notes
Source: Internet | Editor: 程序博客网 | Posted: 2024/05/17 06:56
1. Handling a full disk: find files larger than a given threshold
find / -xdev -size +500M -exec ls -l {} \;
2. To free space immediately without stopping the service, truncate the log instead of deleting it: echo "" > catalina.out
3. Show per-directory disk usage sorted largest first: du -sh /usr/* | sort -rh
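The three steps above can be sketched as one script. This demo runs against a scratch directory so it is safe to execute anywhere; in production you would point find at / and truncate your real log file instead.

```shell
#!/bin/sh
# Demo on a scratch directory; in production, target / and your real log.
dir=$(mktemp -d)
dd if=/dev/zero of="$dir/big.log" bs=1024 count=2048 2>/dev/null  # ~2 MB file

# Find files above a size threshold (1M here for the demo; use +500M in
# production). -xdev keeps find from crossing into other filesystems.
find "$dir" -xdev -type f -size +1M -exec ls -lh {} \;

# Disk usage per entry, largest first (-h sorts human-readable sizes
# correctly, which plain -n does not).
du -sh "$dir"/* | sort -rh

# Truncate the log in place: the space is released immediately even while
# a process still holds the file open. Deleting the file with rm would
# not free the space until every open file descriptor is closed.
: > "$dir/big.log"

rm -rf "$dir"
```

Truncation (`: >` or `echo "" >`) is the key trick: a Tomcat process keeps `catalina.out` open, so `rm` alone leaves the blocks allocated.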
-- Commands for diagnosing an out-of-memory (OOM) problem
Assume the PID is 6055; inspect the heap with jmap -heap:
[root@SZB-L0032773 ~]# jmap -heap 6055
Attaching to process ID 6055, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 25.91-b14
using parallel threads in the new generation.
using thread-local object allocation.
Concurrent Mark-Sweep GC
Heap Configuration:
MinHeapFreeRatio = 40
MaxHeapFreeRatio = 70
MaxHeapSize = 1048576000 (1000.0MB)
NewSize = 174456832 (166.375MB)
MaxNewSize = 174456832 (166.375MB)
OldSize = 874119168 (833.625MB)
NewRatio = 2
SurvivorRatio = 8
MetaspaceSize = 21807104 (20.796875MB)
CompressedClassSpaceSize = 1073741824 (1024.0MB)
MaxMetaspaceSize = 17592186044415 MB
G1HeapRegionSize = 0 (0.0MB)
Heap Usage:
New Generation (Eden + 1 Survivor Space):
capacity = 157024256 (149.75MB)
used = 124465352 (118.69940948486328MB)
free = 32558904 (31.05059051513672MB)
79.26504807002556% used
Eden Space:
capacity = 139591680 (133.125MB)
used = 124287480 (118.52977752685547MB)
free = 15304200 (14.595222473144531MB)
89.03645260233274% used
From Space:
capacity = 17432576 (16.625MB)
used = 177872 (0.1696319580078125MB)
free = 17254704 (16.455368041992188MB)
1.0203426045582706% used
To Space:
capacity = 17432576 (16.625MB)
used = 0 (0.0MB)
free = 17432576 (16.625MB)
0.0% used
concurrent mark-sweep generation:
capacity = 874119168 (833.625MB)
used = 36658360 (34.96013641357422MB)
free = 837460808 (798.6648635864258MB)
4.193748557633734% used
26258 interned Strings occupying 3039944 bytes.
Use the heap summary above to gauge how much memory the young and old generations occupy.
----------------------------------------------------------------------------------------------------------------
jmap -histo:live 6055 | more
Use the class histogram to identify which objects consume the most memory, and to confirm whether some thread keeps creating objects or a resource pool is never closed.
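A minimal sketch of that workflow, assuming a JDK is installed and the target JVM is running. PID 6055 mirrors the transcript above, and `com.example` is a placeholder package name; substitute your own.

```shell
#!/bin/sh
# Guarded so the script degrades gracefully when no JDK or no such PID
# is present; pass the real PID as the first argument.
pid=${1:-6055}
if command -v jmap >/dev/null 2>&1 && kill -0 "$pid" 2>/dev/null; then
    # Live-object class histogram; :live forces a full GC first, so the
    # counts reflect only reachable objects.
    jmap -histo:live "$pid" | head -20
    # Narrow to application classes (placeholder package) to spot a
    # leaking type quickly.
    jmap -histo:live "$pid" | grep 'com\.example' | head -10
else
    echo "skipped: jmap not available or pid $pid not running"
fi
```

Taking two histograms a few minutes apart and comparing instance counts is a quick way to see which class is growing without bound.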
----------------------------------------------------------------------------------------------------------------
Confirm whether system resources (processes, threads) are exhausted:
pstree
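A quick sketch of this check, combining pstree with the per-user limits (the exact commands are illustrative; `ps -eLf` is Linux-specific):

```shell
#!/bin/sh
# Check whether per-user process/thread limits are nearly exhausted,
# a common cause of "unable to create new native thread" OOM errors.
echo "max user processes: $(ulimit -u)"
echo "threads on host:    $(ps -e | tail -n +2 | wc -l)"

# pstree shows the process hierarchy; -p adds PIDs, which makes a
# runaway fork/thread loop easy to spot at a glance.
if command -v pstree >/dev/null 2>&1; then
    pstree -p $$
fi
```

If the host thread count is close to `ulimit -u`, raise the limit in /etc/security/limits.conf or fix the leaking thread pool.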
4. Blocked threads and a surging thread count
jstack -l pid | wc -l
jstack -l pid | grep "BLOCKED" | wc -l
jstack -l pid | grep -i "waiting on condition" | wc -l
Thread blocking is usually caused by waiting on I/O, the network, or a monitor lock; it can lead to request timeouts and to a surge in thread count that ends with the system returning 502 errors.
When this happens, focus on the BLOCKED, waiting on condition, and waiting for monitor entry states in the jstack output.
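The three counting commands above can be wrapped into one triage script. A sketch, assuming a JDK and a running JVM; PID 6055 mirrors the earlier example:

```shell
#!/bin/sh
# One-shot thread triage: take a single dump and count the states that
# matter, instead of invoking jstack once per grep.
pid=${1:-6055}
if command -v jstack >/dev/null 2>&1 && kill -0 "$pid" 2>/dev/null; then
    dump=$(jstack -l "$pid")
    # Note: jstack prints "waiting on condition" and "waiting for
    # monitor entry" in lowercase on the thread header lines, so the
    # case-insensitive -i flag is used below.
    echo "total threads:        $(printf '%s\n' "$dump" | grep -c 'java.lang.Thread.State')"
    echo "BLOCKED:              $(printf '%s\n' "$dump" | grep -c 'BLOCKED')"
    echo "waiting on condition: $(printf '%s\n' "$dump" | grep -ci 'waiting on condition')"
    echo "waiting for monitor:  $(printf '%s\n' "$dump" | grep -ci 'waiting for monitor entry')"
else
    echo "skipped: jstack not available or pid $pid not running"
fi
```

Taking two dumps 10-30 seconds apart and diffing the BLOCKED threads usually points straight at the contended lock.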