Reading the Source of glog, Google's Logging Library for Go
glog is a logging library for Go released by Google. It offers a rich set of interface functions, supports writing logs at different severity levels, rotates log files, and can print logs to the terminal or write them to files under a specified path. This post covers how to use glog and collects notes on selected fragments of its source code.
How to use glog
Create the project directory with the following structure (using mkdir):
```
/LearningGo$ tree -L 1
.
├── bin
├── pkg
└── src

3 directories
```
Create the test code main.go under src, then download the library with go get github.com/golang/glog from the project directory (make sure GOPATH points at the project directory first).
```go
package main

import (
	"flag"
	"fmt"
	"os"

	"github.com/golang/glog"
)

func usage() {
	fmt.Fprintf(os.Stderr, "Usage: ./Program -stderrthreshold=[INFO|WARNING|ERROR|FATAL] -log_dir=[string]\n")
	flag.PrintDefaults()
	os.Exit(2)
}

func init() {
	flag.Usage = usage
	flag.Parse()
}

func main() {
	printLines := 100
	for i := 0; i < printLines; i++ {
		glog.Errorf("Error Line:%d\n", i+1)
		glog.Infof("Info Line:%d\n", i+1)
		glog.Warningf("Warning Line:%d\n", i+1)
	}
	glog.Flush()
}
```
In the code above we call flag.Parse to parse the command-line flags. Although we do not register any glog-specific flags ourselves, glog's own init() has already done that work and registers a number of configuration flags. Errorf(), Infof(), and Warningf() write log entries at their respective severity levels, and Flush() makes sure buffered data is written out to the files. Compile and run the code:
```
$ ./main -log_dir="./logs" -stderrthreshold="ERROR"
...
E1228 09:26:21.750647 28573 main.go:24] Error Line:95
E1228 09:26:21.750668 28573 main.go:24] Error Line:96
E1228 09:26:21.750689 28573 main.go:24] Error Line:97
E1228 09:26:21.750710 28573 main.go:24] Error Line:98
E1228 09:26:21.750734 28573 main.go:24] Error Line:99
E1228 09:26:21.750756 28573 main.go:24] Error Line:100
$ ./main -log_dir="./logs" -stderrthreshold="FATAL"
$ tree logs/ -L 1
logs/
├── main.ERROR -> main.mike-Lenovo-Product.mike.log.ERROR.20161228-092006.28370
├── main.INFO -> main.mike-Lenovo-Product.mike.log.INFO.20161228-092006.28370
├── main.mike-Lenovo-Product.mike.log.ERROR.20161228-092006.28370
├── main.mike-Lenovo-Product.mike.log.INFO.20161228-092006.28370
├── main.mike-Lenovo-Product.mike.log.WARNING.20161228-092006.28370
└── main.WARNING -> main.mike-Lenovo-Product.mike.log.WARNING.20161228-092006.28370
```
In the run above, -log_dir controls which directory the log files are written to, while -stderrthreshold ensures that only messages at or above that severity are also copied to stderr; the default threshold is ERROR. When it is set to FATAL, no error output appears on stderr at all.
Source code fragments
File handling: setting up the directories to write to. The code reads the log_dir flag value; if it is non-empty it is put first in the logDirs list, and os.TempDir() is always appended as a fallback. Log files are later created in the first directory that works.
glog/glog_file.go
```go
var logDir = flag.String("log_dir", "", "If non-empty, write log files in this directory")

var logDirs []string

func createLogDirs() {
	if *logDir != "" {
		logDirs = append(logDirs, *logDir)
	}
	logDirs = append(logDirs, os.TempDir())
}
```
The following code obtains the current user and the machine's hostname via the os and os/user packages, and builds log file names from them.
```go
func init() {
	h, err := os.Hostname()
	if err == nil {
		host = shortHostname(h)
	}

	current, err := user.Current()
	if err == nil {
		userName = current.Username
	}

	// Sanitize userName since it may contain filepath separators on Windows.
	userName = strings.Replace(userName, `\`, "_", -1)
}

// logName returns a new log file name containing tag, with start time t, and
// the name for the symlink for tag.
func logName(tag string, t time.Time) (name, link string) {
	name = fmt.Sprintf("%s.%s.%s.log.%s.%04d%02d%02d-%02d%02d%02d.%d",
		program,
		host,
		userName,
		tag,
		t.Year(),
		t.Month(),
		t.Day(),
		t.Hour(),
		t.Minute(),
		t.Second(),
		pid)
	return name, program + "." + tag
}
```
The file-creation function. A sync.Once guards the directory-list setup so createLogDirs runs only once no matter how many times logging is triggered; after that comes the creation of the log file itself and of the per-tag symlink.
```go
var onceLogDirs sync.Once

// create creates a new log file and returns the file and its filename, which
// contains tag ("INFO", "FATAL", etc.) and t. If the file is created
// successfully, create also attempts to update the symlink for that tag, ignoring
// errors.
func create(tag string, t time.Time) (f *os.File, filename string, err error) {
	onceLogDirs.Do(createLogDirs)
	if len(logDirs) == 0 {
		return nil, "", errors.New("log: no log dirs")
	}
	name, link := logName(tag, t)
	var lastErr error
	for _, dir := range logDirs {
		fname := filepath.Join(dir, name)
		f, err := os.Create(fname)
		if err == nil {
			symlink := filepath.Join(dir, link)
			os.Remove(symlink)        // ignore err
			os.Symlink(name, symlink) // ignore err
			return f, fname, nil
		}
		lastErr = err
	}
	return nil, "", fmt.Errorf("log: cannot create log: %v", lastErr)
}
```
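The sync.Once guarantee can be isolated into a minimal sketch; setup here is a stand-in for createLogDirs, not glog code. Even when many goroutines race to log first, the initializer runs exactly once:

```go
package main

import (
	"fmt"
	"sync"
)

var (
	once  sync.Once
	calls int
)

// setup stands in for glog's createLogDirs.
func setup() { calls++ }

// ensureSetup is the pattern used in create(): every caller goes
// through once.Do, but only the first actually executes setup;
// the rest wait until it has finished, then continue.
func ensureSetup() {
	once.Do(setup)
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			ensureSetup()
		}()
	}
	wg.Wait()
	fmt.Println("setup ran", calls, "time(s)") // always 1
}
```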
glog/glog.go
The implementation behind the exported interface functions. The key point is that buffered data must be written exactly once and never lost, so the logging object (of type loggingT) embeds a sync.Mutex.
```go
const (
	infoLog severity = iota
	warningLog
	errorLog
	fatalLog
	numSeverity = 4
)

const severityChar = "IWEF"

var severityName = []string{
	infoLog:    "INFO",
	warningLog: "WARNING",
	errorLog:   "ERROR",
	fatalLog:   "FATAL",
}

type loggingT struct {
	// Boolean flags. Not handled atomically because the flag.Value interface
	// does not let us avoid the =true, and that shorthand is necessary for
	// compatibility. TODO: does this matter enough to fix? Seems unlikely.
	toStderr     bool // The -logtostderr flag.
	alsoToStderr bool // The -alsologtostderr flag.

	// Level flag. Handled atomically.
	stderrThreshold severity // The -stderrthreshold flag.

	// freeList is a list of byte buffers, maintained under freeListMu.
	freeList *buffer
	// freeListMu maintains the free list. It is separate from the main mutex
	// so buffers can be grabbed and printed to without holding the main lock,
	// for better parallelization.
	freeListMu sync.Mutex

	// mu protects the remaining elements of this structure and is
	// used to synchronize logging.
	mu sync.Mutex
	// file holds writer for each of the log types.
	file [numSeverity]flushSyncWriter
	// pcs is used in V to avoid an allocation when computing the caller's PC.
	pcs [1]uintptr
	// vmap is a cache of the V Level for each V() call site, identified by PC.
	// It is wiped whenever the vmodule flag changes state.
	vmap map[uintptr]Level
	// filterLength stores the length of the vmodule filter chain. If greater
	// than zero, it means vmodule is enabled. It may be read safely
	// using sync.LoadInt32, but is only modified under mu.
	filterLength int32
	// traceLocation is the state of the -log_backtrace_at flag.
	traceLocation traceLocation
	// These flags are modified only under lock, although verbosity may be fetched
	// safely using atomic.LoadInt32.
	vmodule   moduleSpec // The state of the -vmodule flag.
	verbosity Level      // V logging level, the value of the -v flag.
}

var logging loggingT

// Fatal logs to the FATAL, ERROR, WARNING, and INFO logs,
// including a stack trace of all running goroutines, then calls os.Exit(255).
// Arguments are handled in the manner of fmt.Print; a newline is appended if missing.
func Fatal(args ...interface{}) {
	logging.print(fatalLog, args...)
}

func (l *loggingT) print(s severity, args ...interface{}) {
	l.printDepth(s, 1, args...)
}

func (l *loggingT) printDepth(s severity, depth int, args ...interface{}) {
	buf, file, line := l.header(s, depth)
	fmt.Fprint(buf, args...)
	if buf.Bytes()[buf.Len()-1] != '\n' {
		buf.WriteByte('\n')
	}
	l.output(s, buf, file, line, false)
}
```
Flushing data to disk on a timer: flushing periodically rather than on every write improves throughput.
```go
const flushInterval = 30 * time.Second

// flushDaemon periodically flushes the log file buffers.
func (l *loggingT) flushDaemon() {
	for _ = range time.NewTicker(flushInterval).C {
		l.lockAndFlushAll()
	}
}

// lockAndFlushAll is like flushAll but locks l.mu first.
func (l *loggingT) lockAndFlushAll() {
	l.mu.Lock()
	l.flushAll()
	l.mu.Unlock()
}

// flushAll flushes all the logs and attempts to "sync" their data to disk.
// l.mu is held.
func (l *loggingT) flushAll() {
	// Flush from fatal down, in case there's trouble flushing.
	for s := fatalLog; s >= infoLog; s-- {
		file := l.file[s]
		if file != nil {
			file.Flush() // ignore error
			file.Sync()  // ignore error
		}
	}
}
```
Flush and Sync here are both methods of the flushSyncWriter interface.
```go
// flushSyncWriter is the interface satisfied by logging destinations.
type flushSyncWriter interface {
	Flush() error
	Sync() error
	io.Writer
}
```
The core code also includes a Flush with a timeout, which keeps a long-running Flush from blocking forever; once the deadline passes, a warning is printed to stderr instead.
```go
// timeoutFlush calls Flush and returns when it completes or after timeout
// elapses, whichever happens first. This is needed because the hooks invoked
// by Flush may deadlock when glog.Fatal is called from a hook that holds
// a lock.
func timeoutFlush(timeout time.Duration) {
	done := make(chan bool, 1)
	go func() {
		Flush() // calls logging.lockAndFlushAll()
		done <- true
	}()
	select {
	case <-done:
	case <-time.After(timeout):
		fmt.Fprintln(os.Stderr, "glog: Flush took longer than", timeout)
	}
}
```
The log rotation code:
```go
func (sb *syncBuffer) Write(p []byte) (n int, err error) {
	if sb.nbytes+uint64(len(p)) >= MaxSize {
		if err := sb.rotateFile(time.Now()); err != nil {
			sb.logger.exit(err)
		}
	}
	n, err = sb.Writer.Write(p)
	sb.nbytes += uint64(n)
	if err != nil {
		sb.logger.exit(err)
	}
	return
}

// rotateFile closes the syncBuffer's file and starts a new one.
func (sb *syncBuffer) rotateFile(now time.Time) error {
	if sb.file != nil {
		sb.Flush()
		sb.file.Close()
	}
	var err error
	sb.file, _, err = create(severityName[sb.sev], now)
	sb.nbytes = 0
	if err != nil {
		return err
	}

	sb.Writer = bufio.NewWriterSize(sb.file, bufferSize)

	// Write header.
	var buf bytes.Buffer
	fmt.Fprintf(&buf, "Log file created at: %s\n", now.Format("2006/01/02 15:04:05"))
	fmt.Fprintf(&buf, "Running on machine: %s\n", host)
	fmt.Fprintf(&buf, "Binary: Built with %s %s for %s/%s\n", runtime.Compiler, runtime.Version(), runtime.GOOS, runtime.GOARCH)
	fmt.Fprintf(&buf, "Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg\n")
	n, err := sb.file.Write(buf.Bytes())
	sb.nbytes += uint64(n)
	return err
}
```
Because the logging object is shared, changes to its fields must become visible immediately without racing against concurrent readers in other goroutines. The sync/atomic package is used for such reads and writes, for example:
```go
// get returns the value of the severity.
func (s *severity) get() severity {
	return severity(atomic.LoadInt32((*int32)(s)))
}

// set sets the value of the severity.
func (s *severity) set(val severity) {
	atomic.StoreInt32((*int32)(s), int32(val))
}

// Things are consistent now, so enable filtering and verbosity.
// They are enabled in order opposite to that in V.
atomic.StoreInt32(&logging.filterLength, int32(len(filter)))
```
Supplement: sync/atomic
In the example program below, multiple goroutines operate on the same shared counter concurrently. The atomic increment never produces duplicate values; the counter advances one step at a time, so even though memory is shared there is no data race.
```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

func main() {
	var ops uint64 = 0
	for i := 0; i < 50; i++ {
		go func() {
			for {
				atomic.AddUint64(&ops, 1)
				//fmt.Println("Ops:", ops)
				time.Sleep(time.Millisecond)
			}
		}()
	}
	time.Sleep(time.Second)
	opsFinal := atomic.LoadUint64(&ops)
	fmt.Println("Ops:", opsFinal)
}
```
Finally, you are welcome to visit my personal website, jsmean.com, for more of my technical posts.