type mheap struct {
	// lock must only be acquired on the system stack, otherwise a g
	// could self-deadlock if its stack grows with the lock held.
	lock  mutex
	pages pageAlloc // page allocation data structure

	// allspans is a slice of all mspans ever created. Each mspan
	// appears exactly once.
	//
	// The memory for allspans is manually managed and can be
	// reallocated and moved as the heap grows.
	//
	// In general, allspans is protected by mheap_.lock, which
	// prevents concurrent access as well as freeing the backing
	// store. Accesses during STW might not hold the lock, but
	// must ensure that allocation cannot happen around the
	// access (since that may free the backing store).
	allspans []*mspan // all spans out there

	// scavengeGoal is the amount of total retained heap memory (measured by
	// heapRetained) that the runtime will try to maintain by returning memory
	// to the OS.
	scavengeGoal uint64

	// Page reclaimer state

	// reclaimIndex is the page index in allArenas of next page to
	// reclaim. Specifically, it refers to page (i %
	// pagesPerArena) of arena allArenas[i / pagesPerArena].
	//
	// If this is >= 1<<63, the page reclaimer is done scanning
	// the page marks.
	//
	// This is accessed atomically.
	reclaimIndex uint64

	// reclaimCredit is spare credit for extra pages swept. Since
	// the page reclaimer works in large chunks, it may reclaim
	// more than requested. Any spare pages released go to this
	// credit pool.
	//
	// This is accessed atomically.
	reclaimCredit uintptr

	// arenas is the heap arena map. It points to the metadata for
	// the heap for every arena frame of the entire usable virtual
	// address space.
	//
	// Use arenaIndex to compute indexes into this array.
	//
	// For regions of the address space that are not backed by the
	// Go heap, the arena map contains nil.
	//
	// Modifications are protected by mheap_.lock. Reads can be
	// performed without locking; however, a given entry can
	// transition from nil to non-nil at any time when the lock
	// isn't held. (Entries never transition back to nil.)
	//
	// In general, this is a two-level mapping consisting of an L1
	// map and possibly many L2 maps. This saves space when there
	// are a huge number of arena frames. However, on many
	// platforms (even 64-bit), arenaL1Bits is 0, making this
	// effectively a single-level map. In this case, arenas[0]
	// will never be nil.
	arenas [1 << arenaL1Bits]*[1 << arenaL2Bits]*heapArena

	// arenaHints is a list of addresses at which to attempt to
	// add more heap arenas. This is initially populated with a
	// set of general hint addresses, and grown with the bounds of
	// actual heap arena ranges.
	arenaHints *arenaHint

	// allArenas is the arenaIndex of every mapped arena. This can
	// be used to iterate through the address space.
	//
	// Access is protected by mheap_.lock. However, since this is
	// append-only and old backing arrays are never freed, it is
	// safe to acquire mheap_.lock, copy the slice header, and
	// then release mheap_.lock.
	allArenas []arenaIdx

	// sweepArenas is a snapshot of allArenas taken at the
	// beginning of the sweep cycle. This can be read safely by
	// simply blocking GC (by disabling preemption).
	sweepArenas []arenaIdx

	// markArenas is a snapshot of allArenas taken at the beginning
	// of the mark cycle. Because allArenas is append-only, neither
	// this slice nor its contents will change during the mark, so
	// it can be read safely.
	markArenas []arenaIdx

	// curArena is the arena that the heap is currently growing
	// into. This should always be physPageSize-aligned.
	curArena struct {
		base, end uintptr
	}

	_ uint32 // ensure 64-bit alignment of central

	// central is the set of free lists for small size classes;
	// there is one mcentral per spanClass, indexed by spanClass.
	// The padding makes sure that the mcentrals are spaced
	// CacheLinePadSize bytes apart, so that each mcentral.lock
	// gets its own cache line.
	central [numSpanClasses]struct {
		mcentral mcentral
		pad      [cpu.CacheLinePadSize - unsafe.Sizeof(mcentral{})%cpu.CacheLinePadSize]byte
	}

	spanalloc             fixalloc // allocator for span*
	cachealloc            fixalloc // allocator for mcache*
	specialfinalizeralloc fixalloc // allocator for specialfinalizer*
	specialprofilealloc   fixalloc // allocator for specialprofile*
	speciallock           mutex    // lock for special record allocators.
	arenaHintAlloc        fixalloc // allocator for arenaHints

	unused *specialfinalizer // never set, just here to force the specialfinalizer type into DWARF
}
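The arenas field above is the two-level mapping its comment describes: an L1 array of pointers to L2 arrays of *heapArena, indexed via arenaIndex. The sketch below only illustrates that lookup pattern; arenaL1Bits, arenaL2Bits, heapArenaBytes and heapArenaStub are hypothetical stand-ins (the real values are platform-dependent, arenaL1Bits is 0 on most 64-bit platforms, and the real arenaIndex also applies an address offset on some platforms).

```go
package main

import "fmt"

// Illustrative constants only, not the runtime's real values.
const (
	arenaL1Bits    = 6
	arenaL2Bits    = 16
	heapArenaBytes = 1 << 26 // 64 MiB per arena in this sketch
)

// heapArenaStub stands in for the runtime's per-arena metadata (heapArena).
type heapArenaStub struct{ _ [8]uintptr }

type arenaIdx uint

func (i arenaIdx) l1() uint { return uint(i) >> arenaL2Bits }
func (i arenaIdx) l2() uint { return uint(i) & (1<<arenaL2Bits - 1) }

// Two-level map: an L1 array of pointers to L2 arrays. Address-space regions
// with no heap behind them cost only a nil L1 entry.
var arenas [1 << arenaL1Bits]*[1 << arenaL2Bits]*heapArenaStub

// arenaOf returns the arena metadata covering address p, or nil when p is not
// backed by the heap (either level of the map may be missing).
func arenaOf(p uintptr) *heapArenaStub {
	ai := arenaIdx(p / heapArenaBytes)
	l2 := arenas[ai.l1()]
	if l2 == nil {
		return nil
	}
	return l2[ai.l2()]
}

func main() {
	// Register one arena, then look up an address inside and outside it.
	p := uintptr(3 * heapArenaBytes)
	ai := arenaIdx(p / heapArenaBytes)
	arenas[ai.l1()] = new([1 << arenaL2Bits]*heapArenaStub)
	arenas[ai.l1()][ai.l2()] = &heapArenaStub{}

	fmt.Println(arenaOf(p) != nil)                   // true: inside the mapped arena
	fmt.Println(arenaOf(p+10*heapArenaBytes) != nil) // false: no arena registered there
}
```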
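The pad array inside central rounds each element up to a whole number of cache lines, so neighbouring mcentral.lock fields never share a cache line (avoiding false sharing between size classes). A minimal sketch of the same sizing trick, with a hypothetical cacheLineSize and mcentralStub standing in for cpu.CacheLinePadSize and mcentral:

```go
package main

import (
	"fmt"
	"unsafe"
)

// Hypothetical stand-ins: cacheLineSize plays the role of cpu.CacheLinePadSize
// and mcentralStub the role of mcentral.
const cacheLineSize = 64

type mcentralStub struct {
	lock      uint32
	spanclass uint8
	partial   [2]uintptr
	full      [2]uintptr
}

// padded mirrors the element type of mheap.central: the pad field brings the
// total size up to the next multiple of the cache line size, so consecutive
// array elements start on distinct cache lines.
type padded struct {
	c   mcentralStub
	pad [cacheLineSize - unsafe.Sizeof(mcentralStub{})%cacheLineSize]byte
}

func main() {
	fmt.Println("unpadded:", unsafe.Sizeof(mcentralStub{}), "padded:", unsafe.Sizeof(padded{}))
	fmt.Println("whole cache lines:", unsafe.Sizeof(padded{})%cacheLineSize == 0) // true
}
```

Note that when the unpadded size is already a multiple of the cache line size, this formula still adds one full cache line of padding, which is the same behaviour as the expression used in the struct above.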