Moreover, after serialization it performs an extra copy of the data:
func (cfg *frozenConfig) Marshal(v interface{}) ([]byte, error) {
    stream := cfg.BorrowStream(nil)
    defer cfg.ReturnStream(stream)
    stream.WriteVal(v)
    if stream.Error != nil {
        return nil, stream.Error
    }
    result := stream.Buffer()
    // the serialized bytes are copied into a fresh slice on every call
    copied := make([]byte, len(result))
    copy(copied, result)
    return copied, nil
}
Since we are already using a buffer, we might as well share it ^_^; that saves several memory allocations. Just remember to call buffer.Reset() before the next read of http.Response.Body. With that, the optimization of reading http.Request.Body and http.Response.Body is basically done; we will check the concrete effect once the service has been running stably in production for a while. A rough sketch of the idea follows below.
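A minimal sketch of this approach, assuming a hypothetical sync.Pool of bytes.Buffer shared across requests (bufPool, marshalInto and readBody are illustrative names, not the project's actual code): serialize directly into the reused buffer through json-iterator's stream API so the extra copy inside Marshal goes away, and reset the same buffer before reading http.Response.Body.

package main

import (
    "bytes"
    "fmt"
    "net/http"
    "sync"

    jsoniter "github.com/json-iterator/go"
)

var json = jsoniter.ConfigCompatibleWithStandardLibrary

// bufPool is a hypothetical pool of reusable buffers shared across requests.
var bufPool = sync.Pool{
    New: func() interface{} { return new(bytes.Buffer) },
}

// marshalInto serializes v straight into buf via the stream API,
// avoiding the copy that frozenConfig.Marshal makes of the stream's buffer.
func marshalInto(buf *bytes.Buffer, v interface{}) error {
    buf.Reset()
    stream := json.BorrowStream(buf)
    defer json.ReturnStream(stream)
    stream.WriteVal(v)
    if stream.Error != nil {
        return stream.Error
    }
    return stream.Flush() // push the stream's internal buffer into buf
}

// readBody reuses buf to read resp.Body instead of allocating a new slice;
// buf.Reset() is required, otherwise data from the previous response remains.
func readBody(buf *bytes.Buffer, resp *http.Response) ([]byte, error) {
    buf.Reset()
    _, err := buf.ReadFrom(resp.Body)
    return buf.Bytes(), err
}

func main() {
    buf := bufPool.Get().(*bytes.Buffer)
    defer bufPool.Put(buf)
    if err := marshalInto(buf, map[string]string{"hello": "world"}); err != nil {
        panic(err)
    }
    fmt.Println(buf.String()) // {"hello":"world"}
}

The handler can then write buf.Bytes() to the connection directly, so the only allocation left on this path is the occasional growth of the pooled buffer.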
Results analysis
After running in production for a day, let's take a look at the results.
$ go tool pprof allocs2
File: connect_server
Type: alloc_space
Time: Jan 26, 2019 at 10:27am (CST)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top
Showing nodes accounting for 295.40GB, 40.62% of 727.32GB total
Dropped 738 nodes (cum <= 3.64GB)
Showing top 10 nodes out of 174
      flat  flat%   sum%        cum   cum%
   73.52GB 10.11% 10.11%    73.52GB 10.11%  git.tvblack.com/tvblack/connect_server/vendor/github.com/sirupsen/logrus.(*Entry).WithFields
   31.70GB  4.36% 14.47%    31.70GB  4.36%  net/url.unescape
   27.49GB  3.78% 18.25%    54.87GB  7.54%  git.tvblack.com/tvblack/connect_server/models.LogItemsToBytes
   27.41GB  3.77% 22.01%    27.41GB  3.77%  strings.Join
   25.04GB  3.44% 25.46%    25.04GB  3.44%  bufio.NewWriterSize
   24.81GB  3.41% 28.87%    24.81GB  3.41%  bufio.NewReaderSize
   23.91GB  3.29% 32.15%    23.91GB  3.29%  regexp.(*bitState).reset
   23.06GB  3.17% 35.32%    23.06GB  3.17%  math/big.nat.make
   19.90GB  2.74% 38.06%    20.35GB  2.80%  git.tvblack.com/tvblack/connect_server/vendor/github.com/json-iterator/go.(*Iterator).readStringSlowPath
   18.58GB  2.56% 40.62%    19.12GB  2.63%  net/textproto.(*Reader).ReadMIMEHeader