package runtime
Import Path
	runtime (on go.dev)
Dependency Relation
	imports 19 packages, and imported by 53 packages
Involved Source Files
	    alg.go
	    arena.go
	    asan0.go
	    atomic_pointer.go
	    badlinkname.go
	    badlinkname_linux.go
	    cgo.go
	    cgo_mmap.go
	    cgo_sigaction.go
	    cgocall.go
	    cgocallback.go
	    cgocheck.go
	    chan.go
	    checkptr.go
	    compiler.go
	    complex.go
	    coro.go
	    covercounter.go
	    covermeta.go
	    cpuflags.go
	    cpuflags_amd64.go
	    cpuprof.go
	    cputicks.go
	    create_file_unix.go
	    debug.go
	    debugcall.go
	    debuglog.go
	    debuglog_off.go
	    defs_linux_amd64.go
	    env_posix.go
	    error.go
	
		Package runtime contains operations that interact with Go's runtime system,
		such as functions to control goroutines. It also includes the low-level type information
		used by the reflect package; see [reflect]'s documentation for the programmable
		interface to the run-time type system.
		
		# Environment Variables
		
		The following environment variables ($name or %name%, depending on the host
		operating system) control the run-time behavior of Go programs. The meanings
		and use may change from release to release.
		
		The GOGC variable sets the initial garbage collection target percentage.
		A collection is triggered when the ratio of freshly allocated data to live data
		remaining after the previous collection reaches this percentage. The default
		is GOGC=100. Setting GOGC=off disables the garbage collector entirely.
		[runtime/debug.SetGCPercent] allows changing this percentage at run time.
		
		The GOMEMLIMIT variable sets a soft memory limit for the runtime. This memory limit
		includes the Go heap and all other memory managed by the runtime, and excludes
		external memory sources such as mappings of the binary itself, memory managed in
		other languages, and memory held by the operating system on behalf of the Go
		program. GOMEMLIMIT is a numeric value in bytes with an optional unit suffix.
		The supported suffixes include B, KiB, MiB, GiB, and TiB. These suffixes
		represent quantities of bytes as defined by the IEC 80000-13 standard. That is,
		they are based on powers of two: KiB means 2^10 bytes, MiB means 2^20 bytes,
		and so on. The default setting is [math.MaxInt64], which effectively disables the
		memory limit. [runtime/debug.SetMemoryLimit] allows changing this limit at run
		time.
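	
		Both settings can also be adjusted from within a program. The sketch below is
		illustrative only (not part of the runtime documentation); it uses
		[runtime/debug.SetGCPercent] and [runtime/debug.SetMemoryLimit], and the chosen
		values are arbitrary examples.
	
			package main
	
			import (
				"fmt"
				"runtime/debug"
			)
	
			func main() {
				// Equivalent to GOGC=50: trigger a collection when the heap grows
				// 50% over the live data left by the previous collection.
				oldGC := debug.SetGCPercent(50)
	
				// Equivalent to GOMEMLIMIT=256MiB: a soft limit of 256 MiB on all
				// runtime-managed memory (256 << 20 bytes).
				oldLimit := debug.SetMemoryLimit(256 << 20)
	
				fmt.Println("previous GOGC percent:", oldGC)
				fmt.Println("previous memory limit:", oldLimit)
			}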
		
		The GODEBUG variable controls debugging variables within the runtime.
		It is a comma-separated list of name=val pairs setting these named variables:
		
			clobberfree: setting clobberfree=1 causes the garbage collector to
			clobber the memory content of an object with bad content when it frees
			the object.
		
			cpu.*: cpu.all=off disables the use of all optional instruction set extensions.
			cpu.extension=off disables use of instructions from the specified instruction set extension.
			extension is the lower case name for the instruction set extension such as sse41 or avx
			as listed in internal/cpu package. As an example cpu.avx=off disables runtime detection
			and thereby use of AVX instructions.
		
			cgocheck: setting cgocheck=0 disables all checks for packages
			using cgo to incorrectly pass Go pointers to non-Go code.
			Setting cgocheck=1 (the default) enables relatively cheap
			checks that may miss some errors. A more complete, but slow,
			cgocheck mode can be enabled using GOEXPERIMENT (which
			requires a rebuild), see https://pkg.go.dev/internal/goexperiment for details.
		
			disablethp: setting disablethp=1 on Linux disables transparent huge pages for the heap.
			It has no effect on other platforms. disablethp is meant for compatibility with versions
			of Go before 1.21, which stopped working around a Linux kernel default that can result
			in significant memory overuse. See https://go.dev/issue/64332. This setting will be
			removed in a future release, so operators should tweak their Linux configuration to suit
			their needs before then. See https://go.dev/doc/gc-guide#Linux_transparent_huge_pages.
		
			dontfreezetheworld: by default, the start of a fatal panic or throw
			"freezes the world", preempting all threads to stop all running
			goroutines, which makes it possible to traceback all goroutines, and
			keeps their state close to the point of panic. Setting
			dontfreezetheworld=1 disables this preemption, allowing goroutines to
			continue executing during panic processing. Note that goroutines that
			naturally enter the scheduler will still stop. This can be useful when
			debugging the runtime scheduler, as freezetheworld perturbs scheduler
			state and thus may hide problems.
		
			efence: setting efence=1 causes the allocator to run in a mode
			where each object is allocated on a unique page and addresses are
			never recycled.
		
			gccheckmark: setting gccheckmark=1 enables verification of the
			garbage collector's concurrent mark phase by performing a
			second mark pass while the world is stopped.  If the second
			pass finds a reachable object that was not found by concurrent
			mark, the garbage collector will panic.
		
			gcpacertrace: setting gcpacertrace=1 causes the garbage collector to
			print information about the internal state of the concurrent pacer.
		
			gcshrinkstackoff: setting gcshrinkstackoff=1 disables moving goroutines
			onto smaller stacks. In this mode, a goroutine's stack can only grow.
		
			gcstoptheworld: setting gcstoptheworld=1 disables concurrent garbage collection,
			making every garbage collection a stop-the-world event. Setting gcstoptheworld=2
			also disables concurrent sweeping after the garbage collection finishes.
		
			gctrace: setting gctrace=1 causes the garbage collector to emit a single line to standard
			error at each collection, summarizing the amount of memory collected and the
			length of the pause. The format of this line is subject to change. Included in
			the explanation below is also the relevant runtime/metrics metric for each field.
			Currently, it is:
				gc # @#s #%: #+#+# ms clock, #+#/#/#+# ms cpu, #->#-># MB, # MB goal, # MB stacks, #MB globals, # P
			where the fields are as follows:
				gc #         the GC number, incremented at each GC
				@#s          time in seconds since program start
				#%           percentage of time spent in GC since program start
				#+...+#      wall-clock/CPU times for the phases of the GC
				#->#-># MB   heap size at GC start, at GC end, and live heap, or /gc/scan/heap:bytes
				# MB goal    goal heap size, or /gc/heap/goal:bytes
				# MB stacks  estimated scannable stack size, or /gc/scan/stack:bytes
				# MB globals scannable global size, or /gc/scan/globals:bytes
				# P          number of processors used, or /sched/gomaxprocs:threads
			The phases are stop-the-world (STW) sweep termination, concurrent
			mark and scan, and STW mark termination. The CPU times
			for mark/scan are broken down into assist time (GC performed in
			line with allocation), background GC time, and idle GC time.
			If the line ends with "(forced)", this GC was forced by a
			runtime.GC() call.
		
			harddecommit: setting harddecommit=1 causes memory that is returned to the OS to
			also have protections removed on it. This is the only mode of operation on Windows,
			but is helpful in debugging scavenger-related issues on other platforms. Currently,
			only supported on Linux.
		
			inittrace: setting inittrace=1 causes the runtime to emit a single line to standard
			error for each package with init work, summarizing the execution time and memory
			allocation. No information is printed for inits executed as part of plugin loading
			and for packages without both user defined and compiler generated init work.
			The format of this line is subject to change. Currently, it is:
				init # @#ms, # ms clock, # bytes, # allocs
			where the fields are as follows:
				init #      the package name
				@# ms       time in milliseconds when the init started since program start
				# clock     wall-clock time for package initialization work
				# bytes     memory allocated on the heap
				# allocs    number of heap allocations
		
			madvdontneed: setting madvdontneed=0 will use MADV_FREE
			instead of MADV_DONTNEED on Linux when returning memory to the
			kernel. This is more efficient, but means RSS numbers will
			drop only when the OS is under memory pressure. On the BSDs and
			Illumos/Solaris, setting madvdontneed=1 will use MADV_DONTNEED instead
			of MADV_FREE. This is less efficient, but causes RSS numbers to drop
			more quickly.
		
			memprofilerate: setting memprofilerate=X will update the value of runtime.MemProfileRate.
			When set to 0 memory profiling is disabled.  Refer to the description of
			MemProfileRate for the default value.
		
			profstackdepth: profstackdepth=128 (the default) will set the maximum stack
			depth used by all pprof profilers except for the CPU profiler to 128 frames.
			Stack traces that exceed this limit will be truncated to the limit starting
			from the leaf frame. Setting profstackdepth to any value above 1024 will
			silently default to 1024. Future versions of Go may remove this limitation
			and extend profstackdepth to apply to the CPU profiler and execution tracer.
		
			pagetrace: setting pagetrace=/path/to/file will write out a trace of page events
			that can be viewed, analyzed, and visualized using the x/debug/cmd/pagetrace tool.
			Build your program with GOEXPERIMENT=pagetrace to enable this functionality. Do not
			enable this functionality if your program is a setuid binary as it introduces a security
			risk in that scenario. Currently not supported on Windows, plan9 or js/wasm. Setting this
			option for some applications can produce large traces, so use with care.
		
			panicnil: setting panicnil=1 disables the runtime error when calling panic with nil
			interface value or an untyped nil.
		
			runtimecontentionstacks: setting runtimecontentionstacks=1 enables inclusion of call stacks
			related to contention on runtime-internal locks in the "mutex" profile, subject to the
			MutexProfileFraction setting. When runtimecontentionstacks=0, contention on
			runtime-internal locks will report as "runtime._LostContendedRuntimeLock". When
			runtimecontentionstacks=1, the call stacks will correspond to the unlock call that released
			the lock. But instead of the value corresponding to the amount of contention that call
			stack caused, it corresponds to the amount of time the caller of unlock had to wait in its
			original call to lock. A future release is expected to align those and remove this setting.
		
			invalidptr: invalidptr=1 (the default) causes the garbage collector and stack
			copier to crash the program if an invalid pointer value (for example, 1)
			is found in a pointer-typed location. Setting invalidptr=0 disables this check.
			This should only be used as a temporary workaround to diagnose buggy code.
			The real fix is to not store integers in pointer-typed locations.
		
			sbrk: setting sbrk=1 replaces the memory allocator and garbage collector
			with a trivial allocator that obtains memory from the operating system and
			never reclaims any memory.
		
			scavtrace: setting scavtrace=1 causes the runtime to emit a single line to standard
			error, roughly once per GC cycle, summarizing the amount of work done by the
			scavenger as well as the total amount of memory returned to the operating system
			and an estimate of physical memory utilization. The format of this line is subject
			to change, but currently it is:
				scav # KiB work (bg), # KiB work (eager), # KiB total, #% util
			where the fields are as follows:
				# KiB work (bg)    the amount of memory returned to the OS in the background since
				                   the last line
				# KiB work (eager) the amount of memory returned to the OS eagerly since the last line
				# KiB total        the amount of address space currently returned to the OS
				#% util            the fraction of all unscavenged heap memory which is in-use
			If the line ends with "(forced)", then scavenging was forced by a
			debug.FreeOSMemory() call.
		
			scheddetail: setting schedtrace=X and scheddetail=1 causes the scheduler to emit
			detailed multiline info every X milliseconds, describing state of the scheduler,
			processors, threads and goroutines.
		
			schedtrace: setting schedtrace=X causes the scheduler to emit a single line to standard
			error every X milliseconds, summarizing the scheduler state.
		
			tracebackancestors: setting tracebackancestors=N extends tracebacks with the stacks at
			which goroutines were created, where N limits the number of ancestor goroutines to
			report. This also extends the information returned by runtime.Stack.
			Setting N to 0 will report no ancestry information.
		
			tracefpunwindoff: setting tracefpunwindoff=1 forces the execution tracer to
			use the runtime's default stack unwinder instead of frame pointer unwinding.
			This increases tracer overhead, but could be helpful as a workaround or for
			debugging unexpected regressions caused by frame pointer unwinding.
		
			traceadvanceperiod: the approximate period in nanoseconds between trace generations. Only
			applies if a program is built with GOEXPERIMENT=exectracer2. Used primarily for testing
			and debugging the execution tracer.
		
			tracecheckstackownership: setting tracecheckstackownership=1 enables a debug check in the
			execution tracer to double-check stack ownership before taking a stack trace.
		
			asyncpreemptoff: asyncpreemptoff=1 disables signal-based
			asynchronous goroutine preemption. This makes some loops
			non-preemptible for long periods, which may delay GC and
			goroutine scheduling. This is useful for debugging GC issues
			because it also disables the conservative stack scanning used
			for asynchronously preempted goroutines.
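	
		GODEBUG takes effect through the environment of the running program. As an
		illustrative sketch (not part of the runtime documentation), a parent process
		written in Go could launch a child with gctrace enabled as follows; the binary
		path "./myprog" is a placeholder.
	
			package main
	
			import (
				"os"
				"os/exec"
			)
	
			func main() {
				// Run a child process with gctrace enabled so its per-collection
				// summary lines (in the format described above) appear on stderr.
				cmd := exec.Command("./myprog") // placeholder path
				cmd.Env = append(os.Environ(), "GODEBUG=gctrace=1")
				cmd.Stdout = os.Stdout
				cmd.Stderr = os.Stderr
				_ = cmd.Run()
			}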
		
		The [net] and [net/http] packages also refer to debugging variables in GODEBUG.
		See the documentation for those packages for details.
		
		The GOMAXPROCS variable limits the number of operating system threads that
		can execute user-level Go code simultaneously. There is no limit to the number of threads
		that can be blocked in system calls on behalf of Go code; those do not count against
		the GOMAXPROCS limit. This package's [GOMAXPROCS] function queries and changes
		the limit.
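	
		As an illustrative sketch (not part of the runtime documentation), the limit
		can be queried and changed at run time:
	
			package main
	
			import (
				"fmt"
				"runtime"
			)
	
			func main() {
				// GOMAXPROCS(0) queries the current limit without changing it.
				fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
				fmt.Println("NumCPU:", runtime.NumCPU())
	
				// Setting a new limit returns the previous one.
				prev := runtime.GOMAXPROCS(2)
				fmt.Println("previous limit:", prev)
			}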
		
		The GORACE variable configures the race detector, for programs built using -race.
		See the [Race Detector article] for details.
		
		The GOTRACEBACK variable controls the amount of output generated when a Go
		program fails due to an unrecovered panic or an unexpected runtime condition.
		By default, a failure prints a stack trace for the current goroutine,
		eliding functions internal to the run-time system, and then exits with exit code 2.
		The failure prints stack traces for all goroutines if there is no current goroutine
		or the failure is internal to the run-time.
		GOTRACEBACK=none omits the goroutine stack traces entirely.
		GOTRACEBACK=single (the default) behaves as described above.
		GOTRACEBACK=all adds stack traces for all user-created goroutines.
		GOTRACEBACK=system is like “all” but adds stack frames for run-time functions
		and shows goroutines created internally by the run-time.
		GOTRACEBACK=crash is like “system” but crashes in an operating system-specific
		manner instead of exiting. For example, on Unix systems, the crash raises
		SIGABRT to trigger a core dump.
		GOTRACEBACK=wer is like “crash” but doesn't disable Windows Error Reporting (WER).
		For historical reasons, the GOTRACEBACK settings 0, 1, and 2 are synonyms for
		none, all, and system, respectively.
		The [runtime/debug.SetTraceback] function allows increasing the
		amount of output at run time, but it cannot reduce the amount below that
		specified by the environment variable.
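	
		As an illustrative sketch (not part of the runtime documentation), raising the
		traceback level from within a program:
	
			package main
	
			import "runtime/debug"
	
			func main() {
				// Behave as if GOTRACEBACK=crash had been set. The level can be
				// raised at run time but not lowered below the environment setting.
				debug.SetTraceback("crash")
	
				// ... any subsequent fatal error produces a crash-style traceback.
			}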
		
		The GOARCH, GOOS, GOPATH, and GOROOT environment variables complete
		the set of Go environment variables. They influence the building of Go programs
		(see [cmd/go] and [go/build]).
		GOARCH, GOOS, and GOROOT are recorded at compile time and made available by
		constants or functions in this package, but they do not influence the execution
		of the run-time system.
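	
		As an illustrative sketch (not part of the runtime documentation), the recorded
		values are available through constants and functions of this package:
	
			package main
	
			import (
				"fmt"
				"runtime"
			)
	
			func main() {
				fmt.Println("GOOS:", runtime.GOOS)         // compile-time target OS
				fmt.Println("GOARCH:", runtime.GOARCH)     // compile-time target architecture
				fmt.Println("GOROOT:", runtime.GOROOT())   // GOROOT setting (environment or build default)
				fmt.Println("Compiler:", runtime.Compiler) // e.g. "gc"
				fmt.Println("Version:", runtime.Version()) // Go release or commit
			}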
		
		# Security
		
		On Unix platforms, Go's runtime system behaves slightly differently when a
		binary is setuid/setgid or executed with setuid/setgid-like properties, in order
		to prevent dangerous behaviors. On Linux this is determined by checking for the
		AT_SECURE flag in the auxiliary vector, on the BSDs and Solaris/Illumos it is
		determined by checking the issetugid syscall, and on AIX it is determined by
		checking if the uid/gid match the effective uid/gid.
		
		When the runtime determines the binary is setuid/setgid-like, it does three main
		things:
		  - The standard input/output file descriptors (0, 1, 2) are checked to be open.
		    If any of them are closed, they are opened pointing at /dev/null.
		  - The value of the GOTRACEBACK environment variable is set to 'none'.
		  - When a signal is received that terminates the program, or the program
		    encounters an unrecoverable panic that would otherwise override the value
		    of GOTRACEBACK, the goroutine stack, registers, and other memory related
		    information are omitted.
		
		[Race Detector article]: https://go.dev/doc/articles/race_detector
	    fastlog2.go
	    fastlog2table.go
	    fds_unix.go
	    float.go
	    hash64.go
	    heapdump.go
	    histogram.go
	    iface.go
	    lfstack.go
	    linkname.go
	    linkname_swiss.go
	    linkname_unix.go
	    lock_futex.go
	    lock_spinbit.go
	    lockrank.go
	    lockrank_off.go
	    malloc.go
	    map_fast32_swiss.go
	    map_fast64_swiss.go
	    map_faststr_swiss.go
	    map_swiss.go
	    mbarrier.go
	    mbitmap.go
	    mcache.go
	    mcentral.go
	    mcheckmark.go
	    mcleanup.go
	    mem.go
	    mem_linux.go
	    mem_nonsbrk.go
	    metrics.go
	    mfinal.go
	    mfixalloc.go
	    mgc.go
	    mgclimit.go
	    mgcmark.go
	    mgcpacer.go
	    mgcscavenge.go
	    mgcstack.go
	    mgcsweep.go
	    mgcwork.go
	    mheap.go
	    minmax.go
	    mpagealloc.go
	    mpagealloc_64bit.go
	    mpagecache.go
	    mpallocbits.go
	    mprof.go
	    mranges.go
	    msan0.go
	    msize.go
	    mspanset.go
	    mstats.go
	    mwbbuf.go
	    nbpipe_pipe2.go
	    netpoll.go
	    netpoll_epoll.go
	    nonwindows_stub.go
	    note_other.go
	    os_linux.go
	    os_linux_generic.go
	    os_linux_noauxv.go
	    os_linux_x86.go
	    os_nonopenbsd.go
	    os_unix.go
	    panic.go
	    pinner.go
	    plugin.go
	    preempt.go
	    preempt_nonwindows.go
	    print.go
	    proc.go
	    profbuf.go
	    proflabel.go
	    race0.go
	    rand.go
	    rdebug.go
	    retry.go
	    runtime.go
	    runtime1.go
	    runtime2.go
	    runtime_boring.go
	    rwmutex.go
	    security_linux.go
	    security_unix.go
	    select.go
	    sema.go
	    signal_amd64.go
	    signal_linux_amd64.go
	    signal_unix.go
	    sigqueue.go
	    sigqueue_note.go
	    sigtab_linux_generic.go
	    sizeclasses.go
	    slice.go
	    softfloat64.go
	    stack.go
	    stkframe.go
	    string.go
	    stubs.go
	    stubs2.go
	    stubs3.go
	    stubs_amd64.go
	    stubs_linux.go
	    stubs_nonwasm.go
	    symtab.go
	    symtabinl.go
	    synctest.go
	    sys_nonppc64x.go
	    sys_x86.go
	    tagptr.go
	    tagptr_64bit.go
	    test_amd64.go
	    time.go
	    time_nofake.go
	    timeasm.go
	    tls_stub.go
	    trace.go
	    traceallocfree.go
	    traceback.go
	    tracebuf.go
	    tracecpu.go
	    traceevent.go
	    traceexp.go
	    tracemap.go
	    traceregion.go
	    traceruntime.go
	    tracestack.go
	    tracestatus.go
	    tracestring.go
	    tracetime.go
	    tracetype.go
	    type.go
	    typekind.go
	    unsafe.go
	    utf8.go
	    vdso_elf64.go
	    vdso_linux.go
	    vdso_linux_amd64.go
	    vgetrandom_linux.go
	    write_err.go
	    asm_amd64.h
	    asm_ppc64x.h
	    funcdata.h
	    go_tls.h
	    textflag.h
	    asm.s
	    asm_amd64.s
	    duff_amd64.s
	    ints.s
	    memclr_amd64.s
	    memmove_amd64.s
	    preempt_amd64.s
	    rt0_linux_amd64.s
	    sys_linux_amd64.s
	    test_amd64.s
	    time_linux_amd64.s
Code Examples
	
		package main
		
		import (
			"fmt"
			"os"
			"runtime"
		)
		
		func main() {
			tempFile, err := os.CreateTemp(os.TempDir(), "file.*")
			if err != nil {
				fmt.Println("failed to create temp file:", err)
				return
			}
		
			ch := make(chan struct{})
		
			// Attach a cleanup function to the file object.
			runtime.AddCleanup(&tempFile, func(fileName string) {
				if err := os.Remove(fileName); err == nil {
					fmt.Println("temp file has been removed")
				}
				ch <- struct{}{}
			}, tempFile.Name())
		
			if err := tempFile.Close(); err != nil {
				fmt.Println("failed to close temp file:", err)
				return
			}
		
			// Run the garbage collector to reclaim unreachable objects
			// and enqueue their cleanup functions.
			runtime.GC()
		
			// Wait until cleanup function is done.
			<-ch
		
		}
	
		package main
		
		import (
			"fmt"
			"runtime"
			"strings"
		)
		
		func main() {
			c := func() {
				// Ask runtime.Callers for up to 10 PCs, including runtime.Callers itself.
				pc := make([]uintptr, 10)
				n := runtime.Callers(0, pc)
				if n == 0 {
					// No PCs available. This can happen if the first argument to
					// runtime.Callers is large.
					//
					// Return now to avoid processing the zero Frame that would
					// otherwise be returned by frames.Next below.
					return
				}
		
				pc = pc[:n] // pass only valid pcs to runtime.CallersFrames
				frames := runtime.CallersFrames(pc)
		
				// Loop to get frames.
				// A fixed number of PCs can expand to an indefinite number of Frames.
				for {
					frame, more := frames.Next()
		
					// Canonicalize function name and skip callers of this function
					// for predictable example output.
					// You probably don't need this in your own code.
					function := strings.ReplaceAll(frame.Function, "main.main", "runtime_test.ExampleFrames")
					fmt.Printf("- more:%v | %s\n", more, function)
					if function == "runtime_test.ExampleFrames" {
						break
					}
		
					// Check whether there are more frames to process after this one.
					if !more {
						break
					}
				}
			}
		
			b := func() { c() }
			a := func() { b() }
		
			a()
		}
Package-Level Type Names (total 353, in which 12 are exported)
		BlockProfileRecord describes blocking events that originated
		at a particular call sequence (stack trace).
		
			Count int64
			Cycles int64
			StackRecord StackRecord
			
				// stack trace for this record; ends at first 0 entry
		
			
				Stack returns the stack trace associated with the record,
				a prefix of r.Stack0.
		
			func BlockProfile(p []BlockProfileRecord) (n int, ok bool)
			func MutexProfile(p []BlockProfileRecord) (n int, ok bool)
			
			func copyBlockProfileRecord(dst *BlockProfileRecord, src profilerecord.BlockProfileRecord)
			func expandFrames(p []BlockProfileRecord)
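	
		An illustrative sketch (not part of the runtime documentation) of collecting the
		blocking profile directly; most programs use the runtime/pprof package instead:
	
			package main
	
			import (
				"fmt"
				"runtime"
			)
	
			func main() {
				// Record every blocking event (a rate of 0 disables the profile).
				runtime.SetBlockProfileRate(1)
	
				// Ask for the record count first, then retry with a slice large
				// enough that BlockProfile reports ok == true.
				n, _ := runtime.BlockProfile(nil)
				records := make([]runtime.BlockProfileRecord, n+50)
				if n, ok := runtime.BlockProfile(records); ok {
					for _, r := range records[:n] {
						fmt.Println(r.Count, r.Cycles)
					}
				}
			}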
	
		Cleanup is a handle to a cleanup call for a specific object.
		
			
			
				id is the unique identifier for the cleanup within the arena.
			
				ptr contains the pointer to the object.
		
			
				Stop cancels the cleanup call. Stop will have no effect if the cleanup call
				has already been queued for execution (because ptr became unreachable).
				To guarantee that Stop removes the cleanup function, the caller must ensure
				that the pointer that was passed to AddCleanup is reachable across the call to Stop.
		
			func AddCleanup[T, S](ptr *T, cleanup func(S), arg S) Cleanup
	
		The Error interface identifies a run time error.
		
			( Error) Error() builtin.string
			
				RuntimeError is a no-op function but
				serves to distinguish types that are run time
				errors from ordinary errors: a type is a
				run time error if it has a RuntimeError method.
		
			*PanicNilError
			*TypeAssertionError
			
			 boundsError
			 errorAddressString
			 errorString
			 plainError
		
			 Error : error
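	
		An illustrative sketch (not part of the runtime documentation): a recovered panic
		value can be tested against this interface to distinguish run-time errors from
		ordinary panics.
	
			package main
	
			import (
				"fmt"
				"runtime"
			)
	
			func main() {
				defer func() {
					if r := recover(); r != nil {
						// Run-time failures such as the out-of-range index below
						// panic with a value implementing runtime.Error.
						if err, ok := r.(runtime.Error); ok {
							fmt.Println("runtime error:", err.Error())
						}
					}
				}()
	
				var s []int
				_ = s[3] // index out of range; panics with a runtime.Error
			}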
	
		Frame is the information returned by [Frames] for each call frame.
		
			
				Entry point program counter for the function; may be zero
				if not known. If Func is not nil then Entry ==
				Func.Entry().
			
				File and Line are the file name and line number of the
				location in this frame. For non-leaf frames, this will be
				the location of a call. These may be the empty string and
				zero, respectively, if not known. The file name uses
				forward slashes, even on Windows.
			
				Func is the Func value of this call frame. This may be nil
				for non-Go code or fully inlined functions.
			
				Function is the package path-qualified function name of
				this call frame. If non-empty, this string uniquely
				identifies a single function in the program.
				This may be the empty string if not known.
				If Func is not nil then Function == Func.Name().
			Line int
			
				PC is the program counter for the location in this frame.
				For a frame that calls another frame, this will be the
				program counter of a call instruction. Because of inlining,
				multiple frames may have the same PC value, but different
				symbolic information.
			
			
				The runtime's internal view of the function. This field
				is set (funcInfo.valid() returns true) only for Go functions,
				not for C functions.
			
				startLine is the line number of the beginning of the function in
				this frame. Specifically, it is the line number of the func keyword
				for Go functions. Note that //line directives can change the
				filename and/or line number arbitrarily within a function, meaning
				that the Line - startLine offset is not always meaningful.
				
				This may be zero if not known.
		
			func (*Frames).Next() (frame Frame, more bool)
			func go.uber.org/zap/internal/stacktrace.(*Stack).Next() (_ Frame, more bool)
			
			func expandCgoFrames(pc uintptr) []Frame
			func runtime/pprof.allFrames(addr uintptr) ([]Frame, pprof.symbolizeFlag)
			func net/http.relevantCaller() Frame
		
			func go.uber.org/zap/internal/stacktrace.(*Formatter).FormatFrame(frame Frame)
			
			func makeTraceFrame(gen uintptr, f Frame) traceFrame
			func runtime_FrameStartLine(f *Frame) int
			func runtime_FrameSymbolName(f *Frame) string
			func runtime/pprof.runtime_FrameStartLine(f *Frame) int
			func runtime/pprof.runtime_FrameSymbolName(f *Frame) string
	
		Frames may be used to get function/file/line information for a
		slice of PC values returned by [Callers].
		
			
			
				callers is a slice of PCs that have not yet been expanded to frames.
			frameStore [2]Frame
			
				frames is a slice of Frames that have yet to be returned.
			
				nextPC is a next PC to expand ahead of processing callers.
		
			
				Next returns a [Frame] representing the next call frame in the slice
				of PC values. If it has already returned all call frames, Next
				returns a zero [Frame].
				
				The more result indicates whether the next call to Next will return
				a valid [Frame]. It does not necessarily indicate whether this call
				returned one.
				
				See the [Frames] example for idiomatic usage.
		
			func CallersFrames(callers []uintptr) *Frames
	
		A Func represents a Go function in the running binary.
		
			
			
				// unexported field to disallow conversions
		
			
				Entry returns the entry address of the function.
			
				FileLine returns the file name and line number of the
				source code corresponding to the program counter pc.
				The result will not be accurate if pc is not a program
				counter within f.
			
				Name returns the name of the function.
			
			(*Func) funcInfo() funcInfo
			(*Func) raw() *_func
			
				startLine returns the starting line number of the function. i.e., the line
				number of the func keyword.
		
			func FuncForPC(pc uintptr) *Func
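	
		An illustrative sketch (not part of the runtime documentation) of resolving a
		program counter obtained from [Caller] into a *Func and its source position:
	
			package main
	
			import (
				"fmt"
				"runtime"
			)
	
			func main() {
				// Caller(0) reports the PC of this call site.
				pc, _, _, ok := runtime.Caller(0)
				if !ok {
					return
				}
				f := runtime.FuncForPC(pc)
				if f == nil {
					return
				}
				file, line := f.FileLine(pc)
				fmt.Println(f.Name(), file, line)
			}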
	
		A MemProfileRecord describes the live objects allocated
		by a particular call sequence (stack trace).
		
			
				// number of bytes allocated, freed
			
				// number of objects allocated, freed
			
				// number of bytes allocated, freed
			
				// number of objects allocated, freed
			
				// stack trace for this record; ends at first 0 entry
		
			
				InUseBytes returns the number of bytes in use (AllocBytes - FreeBytes).
			
				InUseObjects returns the number of objects in use (AllocObjects - FreeObjects).
			
				Stack returns the stack trace associated with the record,
				a prefix of r.Stack0.
		
			func MemProfile(p []MemProfileRecord, inuseZero bool) (n int, ok bool)
			
			func copyMemProfileRecord(dst *MemProfileRecord, src profilerecord.MemProfileRecord)
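	
		An illustrative sketch (not part of the runtime documentation) of the retry
		pattern for sizing the record slice; most programs use the runtime/pprof
		package instead:
	
			package main
	
			import (
				"fmt"
				"runtime"
			)
	
			func main() {
				// Grow the slice until MemProfile reports that every record fit.
				var records []runtime.MemProfileRecord
				n, ok := runtime.MemProfile(nil, true)
				for !ok {
					records = make([]runtime.MemProfileRecord, n+50)
					n, ok = runtime.MemProfile(records, true)
				}
				for _, r := range records[:n] {
					fmt.Println(r.InUseBytes(), r.InUseObjects())
				}
			}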
	
		A MemStats records statistics about the memory allocator.
		
			
				Alloc is bytes of allocated heap objects.
				
				This is the same as HeapAlloc (see below).
			
				BuckHashSys is bytes of memory in profiling bucket hash tables.
			
				BySize reports per-size class allocation statistics.
				
				BySize[N] gives statistics for allocations of size S where
				BySize[N-1].Size < S ≤ BySize[N].Size.
				
				This does not report allocations larger than BySize[60].Size.
			
				DebugGC is currently unused.
			
				EnableGC indicates that GC is enabled. It is always true,
				even if GOGC=off.
			
				Frees is the cumulative count of heap objects freed.
			
				GCCPUFraction is the fraction of this program's available
				CPU time used by the GC since the program started.
				
				GCCPUFraction is expressed as a number between 0 and 1,
				where 0 means GC has consumed none of this program's CPU. A
				program's available CPU time is defined as the integral of
				GOMAXPROCS since the program started. That is, if
				GOMAXPROCS is 2 and a program has been running for 10
				seconds, its "available CPU" is 20 seconds. GCCPUFraction
				does not include CPU time used for write barrier activity.
				
				This is the same as the fraction of CPU reported by
				GODEBUG=gctrace=1.
			
				GCSys is bytes of memory in garbage collection metadata.
			
				HeapAlloc is bytes of allocated heap objects.
				
				"Allocated" heap objects include all reachable objects, as
				well as unreachable objects that the garbage collector has
				not yet freed. Specifically, HeapAlloc increases as heap
				objects are allocated and decreases as the heap is swept
				and unreachable objects are freed. Sweeping occurs
				incrementally between GC cycles, so these two processes
				occur simultaneously, and as a result HeapAlloc tends to
				change smoothly (in contrast with the sawtooth that is
				typical of stop-the-world garbage collectors).
			
				HeapIdle is bytes in idle (unused) spans.
				
				Idle spans have no objects in them. These spans could be
				(and may already have been) returned to the OS, or they can
				be reused for heap allocations, or they can be reused as
				stack memory.
				
				HeapIdle minus HeapReleased estimates the amount of memory
				that could be returned to the OS, but is being retained by
				the runtime so it can grow the heap without requesting more
				memory from the OS. If this difference is significantly
				larger than the heap size, it indicates there was a recent
				transient spike in live heap size.
			
				HeapInuse is bytes in in-use spans.
				
				In-use spans have at least one object in them. These spans
				can only be used for other objects of roughly the same
				size.
				
				HeapInuse minus HeapAlloc estimates the amount of memory
				that has been dedicated to particular size classes, but is
				not currently being used. This is an upper bound on
				fragmentation, but in general this memory can be reused
				efficiently.
			
				HeapObjects is the number of allocated heap objects.
				
				Like HeapAlloc, this increases as objects are allocated and
				decreases as the heap is swept and unreachable objects are
				freed.
			
				HeapReleased is bytes of physical memory returned to the OS.
				
				This counts heap memory from idle spans that was returned
				to the OS and has not yet been reacquired for the heap.
			
				HeapSys is bytes of heap memory obtained from the OS.
				
				HeapSys measures the amount of virtual address space
				reserved for the heap. This includes virtual address space
				that has been reserved but not yet used, which consumes no
				physical memory, but tends to be small, as well as virtual
				address space for which the physical memory has been
				returned to the OS after it became unused (see HeapReleased
				for a measure of the latter).
				
				HeapSys estimates the largest size the heap has had.
			
				LastGC is the time the last garbage collection finished, as
				nanoseconds since 1970 (the UNIX epoch).
			
				Lookups is the number of pointer lookups performed by the
				runtime.
				
				This is primarily useful for debugging runtime internals.
			
				MCacheInuse is bytes of allocated mcache structures.
			
				MCacheSys is bytes of memory obtained from the OS for
				mcache structures.
			
				MSpanInuse is bytes of allocated mspan structures.
			
				MSpanSys is bytes of memory obtained from the OS for mspan
				structures.
			
				Mallocs is the cumulative count of heap objects allocated.
				The number of live objects is Mallocs - Frees.
			
				NextGC is the target heap size of the next GC cycle.
				
				The garbage collector's goal is to keep HeapAlloc ≤ NextGC.
				At the end of each GC cycle, the target for the next cycle
				is computed based on the amount of reachable data and the
				value of GOGC.
			
				NumForcedGC is the number of GC cycles that were forced by
				the application calling the GC function.
			
				NumGC is the number of completed GC cycles.
			
				OtherSys is bytes of memory in miscellaneous off-heap
				runtime allocations.
			
				PauseEnd is a circular buffer of recent GC pause end times,
				as nanoseconds since 1970 (the UNIX epoch).
				
				This buffer is filled the same way as PauseNs. There may be
				multiple pauses per GC cycle; this records the end of the
				last pause in a cycle.
			
				PauseNs is a circular buffer of recent GC stop-the-world
				pause times in nanoseconds.
				
				The most recent pause is at PauseNs[(NumGC+255)%256]. In
				general, PauseNs[N%256] records the time paused in the most
				recent N%256th GC cycle. There may be multiple pauses per
				GC cycle; this is the sum of all pauses during a cycle.
			
				PauseTotalNs is the cumulative nanoseconds in GC
				stop-the-world pauses since the program started.
				
				During a stop-the-world pause, all goroutines are paused
				and only the garbage collector can run.
			
				StackInuse is bytes in stack spans.
				
				In-use stack spans have at least one stack in them. These
				spans can only be used for other stacks of the same size.
				
				There is no StackIdle because unused stack spans are
				returned to the heap (and hence counted toward HeapIdle).
			
				StackSys is bytes of stack memory obtained from the OS.
				
				StackSys is StackInuse, plus any memory obtained directly
				from the OS for OS thread stacks.
				
				In non-cgo programs this metric is currently equal to StackInuse
				(but this should not be relied upon, and the value may change in
				the future).
				
				In cgo programs this metric includes OS thread stacks allocated
				directly from the OS. Currently, this only accounts for one stack in
				c-shared and c-archive build modes and other sources of stacks from
				the OS (notably, any allocated by C code) are not currently measured.
				Note this too may change in the future.
			
				Sys is the total bytes of memory obtained from the OS.
				
				Sys is the sum of the XSys fields below. Sys measures the
				virtual address space reserved by the Go runtime for the
				heap, stacks, and other internal data structures. It's
				likely that not all of the virtual address space is backed
				by physical memory at any given moment, though in general
				it all was at some point.
			
				TotalAlloc is cumulative bytes allocated for heap objects.
				
				TotalAlloc increases as heap objects are allocated, but
				unlike Alloc and HeapAlloc, it does not decrease when
				objects are freed.
		
			func ReadMemStats(m *MemStats)
			
			func dumpmemstats(m *MemStats)
			func mdump(m *MemStats)
			func readmemstats_m(stats *MemStats)
			func writeheapdump_m(fd uintptr, m *MemStats)
		
			
			  var testing.memStats
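	
		An illustrative sketch (not part of the runtime documentation) of reading these
		statistics and computing a few of the derived quantities described above:
	
			package main
	
			import (
				"fmt"
				"runtime"
			)
	
			func main() {
				var m runtime.MemStats
				runtime.ReadMemStats(&m)
	
				fmt.Println("live objects:", m.Mallocs-m.Frees)
				fmt.Println("heap in use (bytes):", m.HeapInuse)
				fmt.Println("idle heap retained (bytes):", m.HeapIdle-m.HeapReleased)
				fmt.Println("completed GC cycles:", m.NumGC)
			}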
	
		A PanicNilError happens when code calls panic(nil).
		
		Before Go 1.21, programs that called panic(nil) observed recover returning nil.
		Starting in Go 1.21, programs that call panic(nil) observe recover returning a *PanicNilError.
		Programs can change back to the old behavior by setting GODEBUG=panicnil=1.
		
			(*PanicNilError) Error() string
			(*PanicNilError) RuntimeError()
		
			*PanicNilError : Error
			*PanicNilError : error
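	
		An illustrative sketch (not part of the runtime documentation) of observing the
		Go 1.21+ behavior when recovering from panic(nil):
	
			package main
	
			import (
				"fmt"
				"runtime"
			)
	
			func main() {
				defer func() {
					// With the default GODEBUG setting (panicnil=0), recover
					// returns a *runtime.PanicNilError instead of nil.
					switch r := recover().(type) {
					case *runtime.PanicNilError:
						fmt.Println("recovered:", r.Error())
					default:
						fmt.Println("recovered:", r)
					}
				}()
				panic(nil)
			}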
	
		A Pinner is a set of Go objects each pinned to a fixed location in memory. The
		[Pinner.Pin] method pins one object, while [Pinner.Unpin] unpins all pinned
		objects. See their comments for more information.
		
			
			pinner *pinner
			pinner.refStore [5]unsafe.Pointer
			pinner.refs []unsafe.Pointer
		
			
				Pin pins a Go object, preventing it from being moved or freed by the garbage
				collector until the [Pinner.Unpin] method has been called.
				
				A pointer to a pinned object can be directly stored in C memory or can be
				contained in Go memory passed to C functions. If the pinned object itself
				contains pointers to Go objects, these objects must be pinned separately if they
				are going to be accessed from C code.
				
				The argument must be a pointer of any type or an [unsafe.Pointer].
				It's safe to call Pin on non-Go pointers, in which case Pin will do nothing.
			
				Unpin unpins all pinned objects of the [Pinner].
			
			( Pinner) unpin()
		
			
			func debugPinnerV1() *Pinner
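	
		An illustrative sketch (not part of the runtime documentation) of pinning an
		object for the duration of a call that needs a stable address, typically a
		cgo call (elided here):
	
			package main
	
			import "runtime"
	
			func main() {
				var pinner runtime.Pinner
	
				buf := new([64]byte)
				pinner.Pin(buf) // buf cannot be moved or freed until Unpin
	
				// ... pass &buf[0] to code that requires a stable address,
				// such as a C function called through cgo ...
	
				pinner.Unpin() // releases every object pinned by this Pinner
			}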
	
		A StackRecord describes a single execution stack.
		
			
				// stack trace for this record; ends at first 0 entry
		
			
				Stack returns the stack trace associated with the record,
				a prefix of r.Stack0.
		
			func GoroutineProfile(p []StackRecord) (n int, ok bool)
			func ThreadCreateProfile(p []StackRecord) (n int, ok bool)
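	
		An illustrative sketch (not part of the runtime documentation) of the sizing
		pattern for [GoroutineProfile]; most programs use the runtime/pprof package
		instead:
	
			package main
	
			import (
				"fmt"
				"runtime"
			)
	
			func main() {
				// Ask for the record count first, then retry with a slice large
				// enough that GoroutineProfile reports ok == true.
				n, _ := runtime.GoroutineProfile(nil)
				records := make([]runtime.StackRecord, n+10)
				if n, ok := runtime.GoroutineProfile(records); ok {
					for _, r := range records[:n] {
						fmt.Println(len(r.Stack()), "frames")
					}
				}
			}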
		
		A _defer holds an entry on the list of deferred calls.
		If you add a field here, add code to clear it in deferProcStack.
		This struct must match the code in cmd/compile/internal/ssagen/ssa.go:deferstruct
		and cmd/compile/internal/ssagen/ssa.go:(*state).call.
		Some defers will be allocated on the stack and some on the heap.
		All defers are logically part of the stack, so write barriers to
		initialize them are not required. All defers must be manually scanned,
		and for heap defers, marked.
		
			
			
				// can be nil for open-coded defers
			
				If rangefunc is true, *head is the head of the atomic linked list
				during a range-over-func execution.
			heap bool
			
				// next defer on G; can point to either heap or stack!
			
				// pc at time of defer
			
				// true for rangefunc list
			
				// sp at time of defer
		
			
			func badDefer() *_defer
			func newdefer() *_defer
		
			
			func deferconvert(d0 *_defer)
			func deferprocStack(d *_defer)
	
		Layout of in-memory per-function information prepared by linker
		See https://golang.org/s/go12symtab.
		Keep in sync with linker (../cmd/link/internal/ld/pcln.go:/pclntab)
		and with package debug/gosym and with symtab.go in package runtime.
		
			
				// Only in static data
			
			
				// in/out args size
			
				// runtime.cutab offset of this function's CU
			
				// offset of start of a deferreturn call instruction from entry, if any.
			
				// start pc, as offset from moduledata.text/pcHeader.textStart
			flag abi.FuncFlag
			
				// set for certain special runtime functions
			
				// function name, as index into moduledata.funcnametab.
			
				// must be last, must end on a uint32-aligned boundary
			npcdata uint32
			pcfile uint32
			pcln uint32
			pcsp uint32
			
				// line number of start of function (func keyword/TEXT directive)
		
			
			(*_func) funcInfo() funcInfo
			
				isInlined reports whether f should be re-interpreted as a *funcinl.
		
			
			func (*Func).raw() *_func
	
		A _panic holds information about an active panic.
		
		A _panic value must only ever live on the stack.
		
		The argp and link fields are stack pointers, but don't need special
		handling during stack growth: because they are pointer-typed and
		_panic values only live on the stack, regular stack pointer
		adjustment takes care of them.
		
			
			
				// argument to panic
			
				// pointer to arguments of deferred call run during panic; cannot move - known to liblink
			
				Extra state for handling open-coded defers.
			deferreturn bool
			fp unsafe.Pointer
			goexit bool
			
				// link to earlier panic
			lr uintptr
			
				// whether this panic has been recovered
			
				retpc stores the PC where the panic should jump back to, if the
				function last returned by _panic.next() recovers the panic.
			slotsPtr unsafe.Pointer
			
				The current stack frame that we're running deferred calls for.
			
				startPC and startSP track where _panic.start was called.
			startSP unsafe.Pointer
		
			
			(*_panic) initOpenCodedDefers(fn funcInfo, varp unsafe.Pointer) bool
			
				nextDefer returns the next deferred function to invoke, if any.
				
				Note: The "ok bool" result is necessary to correctly handle when
				the deferred function itself was nil (e.g., "defer (func())(nil)").
			
				nextFrame finds the next frame that contains deferred calls, if any.
			
				start initializes a panic to start unwinding the stack.
				
				If p.goexit is true, then start may return multiple times.
		
			
			func fatalpanic(msgs *_panic)
			func preprintpanics(p *_panic)
			func printpanics(p *_panic)
	
		activeSweep is a type that captures whether sweeping
		is done, and whether there are any outstanding sweepers.
		
		Every potential sweeper must call begin() before they look
		for work, and end() after they've finished sweeping.
		
			
			
				state is divided into two parts.
				
				The top bit (masked by sweepDrainedMask) is a boolean
				value indicating whether all the sweep work has been
				drained from the queue.
				
				The rest of the bits are a counter, indicating the
				number of outstanding concurrent sweepers.
		
			
			
				begin registers a new sweeper. Returns a sweepLocker
				for acquiring spans for sweeping. Any outstanding sweeper blocks
				sweep termination.
				
				If the sweepLocker is invalid, the caller can be sure that all
				outstanding sweep work has been drained, so there is nothing left
				to sweep. Note that there may be sweepers currently running, so
				this does not indicate that all sweeping has completed.
				
				Even if the sweepLocker is invalid, its sweepGen is always valid.
			
				end deregisters a sweeper. Must be called once for each time
				begin is called if the sweepLocker is valid.
			
				isDone returns true if all sweep work has been drained and no more
				outstanding sweepers exist. That is, when the sweep phase is
				completely done.
			
				markDrained marks the active sweep cycle as having drained
				all remaining work. This is safe to be called concurrently
				with all other methods of activeSweep, though may race.
				
				Returns true if this call was the one that actually performed
				the mark.
			
				reset sets up the activeSweep for the next sweep cycle.
				
				The world must be stopped.
			
				sweepers returns the current number of active sweepers.
	
		addrRange represents a region of address space.
		
		An addrRange must never span a gap in the address space.
		
			
			
				base and limit together represent the region of address space
				[base, limit). That is, base is inclusive, limit is exclusive.
				These are addresses over an offset view of the address space on
				platforms with a segmented address space, that is, on platforms
				where arenaBaseOffset != 0.
			
				base and limit together represent the region of address space
				[base, limit). That is, base is inclusive, limit is exclusive.
				These are addresses over an offset view of the address space on
				platforms with a segmented address space, that is, on platforms
				where arenaBaseOffset != 0.
		
			
			
				contains returns whether or not the range contains a given address.
			
				removeGreaterEqual removes all addresses in a greater than or equal
				to addr and returns the new range.
			
				size returns the size of the range represented in bytes.
			
				subtract takes the addrRange toPrune and cuts out any overlap with
				from, then returns the new range. subtract assumes that a and b
				either don't overlap at all, only overlap on one side, or are equal.
				If b is strictly contained in a, thus forcing a split, it will throw.
			
				takeFromBack takes len bytes from the end of the address range, aligning
				the limit to align after subtracting len. On success, returns the aligned
				start of the region taken and true.
			
				takeFromFront takes len bytes from the front of the address range, aligning
				the base to align first. On success, returns the aligned start of the region
				taken and true.
		
			
			func makeAddrRange(base, limit uintptr) addrRange
	
		addrRanges is a data structure holding a collection of ranges of
		address space.
		
		The ranges are coalesced eagerly to reduce the
		number of ranges it holds.
		
		The slice backing store for this field is persistentalloc'd
		and thus there is no way to free it.
		
		addrRanges is not thread-safe.
		
			
			
				ranges is a slice of ranges sorted by base.
			
				sysStat is the stat to track allocations by this type
			
				totalBytes is the total amount of address space in bytes counted by
				this addrRanges.
		
			
			
				add inserts a new address range to a.
				
				r must not overlap with any address range in a and r.size() must be > 0.
			
				cloneInto makes a deep clone of a's state into b, re-using
				b's ranges if able.
			
				contains returns true if a covers the address addr.
			
				findAddrGreaterEqual returns the smallest address represented by a
				that is >= addr. Thus, if the address is represented by a,
				then it returns addr. The second return value indicates whether
				such an address exists for addr in a. That is, if addr is larger than
				any address known to a, the second return value will be false.
			
				findSucc returns the first index in a such that addr is
				less than the base of the addrRange at that index.
			(*addrRanges) init(sysStat *sysMemStat)
			
				removeGreaterEqual removes the ranges of a which are above addr, and additionally
				splits any range containing addr.
			
				removeLast removes and returns the highest-addressed contiguous range
				of a, or the last nBytes of that range, whichever is smaller. If a is
				empty, it returns an empty range.
	
		
			
			
				// ptr distance from old to new stack (newbase - oldbase)
			old stack
			
				sghi is the highest sudog.elem on the stack.
		
			
			func adjustctxt(gp *g, adjinfo *adjustinfo)
			func adjustdefers(gp *g, adjinfo *adjustinfo)
			func adjustframe(frame *stkframe, adjinfo *adjustinfo)
			func adjustpanics(gp *g, adjinfo *adjustinfo)
			func adjustpointer(adjinfo *adjustinfo, vpp unsafe.Pointer)
			func adjustpointers(scanp unsafe.Pointer, bv *bitvector, adjinfo *adjustinfo, f funcInfo)
			func adjustsudogs(gp *g, adjinfo *adjustinfo)
			func syncadjustsudogs(gp *g, used uintptr, adjinfo *adjustinfo) uintptr
	
		ancestorInfo records details of where a goroutine was started.
		
			
			
				// goroutine id of this goroutine; original goroutine possibly dead
			
				// pc of go statement that created this goroutine
			
				// pcs from the stack of this goroutine
		
			
			func saveAncestors(callergp *g) *[]ancestorInfo
		
			
			func printAncestorTraceback(ancestor ancestorInfo)
	
		arenaHint is a hint for where to grow the heap arenas. See
		mheap_.arenaHints.
		
			
			addr uintptr
			down bool
			next *arenaHint
	
		
			
			
				l1 returns the "l1" portion of an arenaIdx.
				
				Marked nosplit because it's called by spanOf and other nosplit
				functions.
			
				l2 returns the "l2" portion of an arenaIdx.
				
				Marked nosplit because it's called by spanOf and other nosplit
				functions.
		
			
			func arenaIndex(p uintptr) arenaIdx
		
			
			func arenaBase(i arenaIdx) uintptr
	
		atomicHeadTailIndex is an atomically-accessed headTailIndex.
		
			
			u atomic.Uint64
		
			
			
				cas atomically compares-and-swaps a headTailIndex value.
			
				decHead atomically decrements the head of a headTailIndex.
			
				incHead atomically increments the head of a headTailIndex.
			
				incTail atomically increments the tail of a headTailIndex.
			
				load atomically reads a headTailIndex value.
			
				reset clears the headTailIndex to (0, 0).
	
		atomicMSpanPointer is an atomic.Pointer[mspan]. Can't use generics because it's NotInHeap.
		
			
			p atomic.UnsafePointer
		
			
				Load returns the *mspan.
			
				Store stores an *mspan.
	
		atomicOffAddr is like offAddr, but operations on it are atomic.
		It also contains operations to be able to store marked addresses
		to ensure that they're not overridden until they've been seen.
		
			
			
				a contains the offset address, unlike offAddr.
		
			
				Clear attempts to store minOffAddr in atomicOffAddr. It may fail
				if a marked value is placed in the box in the meanwhile.
			
				Load returns the address in the box as a virtual address. It also
				returns if the value was marked or not.
			
				StoreMarked stores addr but first converted to the offset address
				space and then negated.
			
				StoreMin stores addr if it's less than the current value in the
				offset address space if the current value is not marked.
			
				StoreUnmark attempts to unmark the value in atomicOffAddr and
				replace it with newAddr. markedAddr must be a marked address
				returned by Load. This function will not store newAddr if the
				box no longer contains markedAddr.
	
		atomicScavChunkData is an atomic wrapper around a scavChunkData
		that stores it in its packed form.
		
			
			value atomic.Uint64
		
			
			
				load loads and unpacks a scavChunkData.
			
				store packs and writes a new scavChunkData. store must be serialized
				with other calls to store.
	
		atomicSpanSetSpinePointer is an atomically-accessed spanSetSpinePointer.
		
		It has the same semantics as atomic.UnsafePointer.
		
			
			a atomic.UnsafePointer
		
			
				Loads the spanSetSpinePointer and returns it.
				
				It has the same semantics as atomic.UnsafePointer.
			
				Stores the spanSetSpinePointer.
				
				It has the same semantics as [atomic.UnsafePointer].
	
		A bitCursor is a simple cursor to memory to which we
		can write a set of bits.
		
			
			
				// cursor points to bit n of region
			
				// base of region
		
			
			( bitCursor) offset(cnt uintptr) bitCursor
			
				Write to b cnt bits starting at bit 0 of data.
				Requires cnt>0.
		
			
			func buildGCMask(t *_type, dst bitCursor)
	
		Information from the compiler about the layout of stack frames.
		Note: this type must agree with reflect.bitVector.
		
			
			bytedata *uint8
			
				// # of bits
		
			
			
				ptrbit returns the i'th bit in bv.
				ptrbit is less efficient than iterating directly over bitvector bits,
				and should only be used in non-performance-critical code.
				See adjustpointers for an example of a high-efficiency walk of a bitvector.
		
			
			func makeheapobjbv(p uintptr, size uintptr) bitvector
			func progToPointerMask(prog *byte, size uintptr) bitvector
			func stackmapdata(stkmap *stackmap, n int32) bitvector
		
			
			func adjustpointers(scanp unsafe.Pointer, bv *bitvector, adjinfo *adjustinfo, f funcInfo)
			func dumpbv(cbv *bitvector, offset uintptr)
			func dumpfields(bv bitvector)
			func dumpobj(obj unsafe.Pointer, size uintptr, bv bitvector)
	
		A blockRecord is the bucket data for a bucket of type blockProfile,
		which is used in blocking and mutex profiles.
		
			
			count float64
			cycles int64
	
		A boundsError represents an indexing or slicing operation gone wrong.
		
			
			code boundsErrorCode
			
				Values in an index or slice expression can be signed or unsigned.
				That means we'd need 65 bits to encode all possible indexes, from -2^63 to 2^64-1.
				Instead, we keep track of whether x should be interpreted as signed or unsigned.
				y is known to be nonnegative and to fit in an int.
			x int64
			y int
		
			( boundsError) Error() string
			( boundsError) RuntimeError()
		
			 boundsError : Error
			 boundsError : error
	
		
			
			const boundsConvert
			const boundsIndex
			const boundsSlice3Acap
			const boundsSlice3Alen
			const boundsSlice3B
			const boundsSlice3C
			const boundsSliceAcap
			const boundsSliceAlen
			const boundsSliceB
	
		A bucket holds per-call-stack profiling information.
		The representation is a bit sleazy, inherited from C.
		This struct defines the bucket header. It is followed in
		memory by the stack words and then the actual record
		data, either a memRecord or a blockRecord.
		
		Per-call-stack profiling information.
		Lookup by hashing call stack into a linked-list hash table.
		
		None of the fields in this bucket header are modified after
		creation, including its next and allnext links.
		
		No heap pointers.
		
			
			allnext *bucket
			hash uintptr
			next *bucket
			nstk uintptr
			size uintptr
			
				// memBucket or blockBucket (includes mutexProfile)
		
			
			
				bp returns the blockRecord associated with the blockProfile bucket b.
			
				mp returns the memRecord associated with the memProfile bucket b.
			
				stk returns the slice in b holding the stack. The caller can assume that the
				backing array is immutable.
		
			
			func newBucket(typ bucketType, nstk int) *bucket
			func stkbucket(typ bucketType, size uintptr, stk []uintptr, alloc bool) *bucket
		
			
			func dumpmemprof_callback(b *bucket, nstk uintptr, pstk *uintptr, size, allocs, frees uintptr)
			func mProf_Free(b *bucket, size uintptr)
			func setprofilebucket(p unsafe.Pointer, b *bucket)
	
		
			
			func newBucket(typ bucketType, nstk int) *bucket
			func saveblockevent(cycles, rate int64, skip int, which bucketType)
			func saveBlockEventStack(cycles, rate int64, stk []uintptr, which bucketType)
			func stkbucket(typ bucketType, size uintptr, stk []uintptr, alloc bool) *bucket
		
			
			const blockProfile
			const memProfile
			const mutexProfile
	 type buckhashArray ([...])	
		Addresses collected in a cgo backtrace when crashing.
		Length must match arg.Max in x_cgo_callers in runtime/cgo/gcc_traceback.c.
		
			
			func printCgoTraceback(callers *cgoCallers)
		
			
			  var sigprofCallers
	
		cgoSymbolizerArg is the type passed to cgoSymbolizer.
		
			
			data uintptr
			entry uintptr
			file *byte
			funcName *byte
			lineno uintptr
			more uintptr
			pc uintptr
		
			
			func callCgoSymbolizer(arg *cgoSymbolizerArg)
			func printOneCgoTraceback(pc uintptr, commitFrame func() (pr, stop bool), arg *cgoSymbolizerArg) bool
	
		cgoTracebackArg is the type passed to cgoTraceback.
		
			
			buf *uintptr
			context uintptr
			max uintptr
			sigContext uintptr
	
		A checkmarksMap stores the GC marks in "checkmarks" mode. It is a
		per-arena bitmap with a bit for every word in the arena. The mark
		is stored on the bit corresponding to the first word of the marked
		allocation.
		
			
			b [1048576]uint8
	
		
			
			
				// size of args region
			
				Information passed up from the callee frame about
				the layout of the outargs region.
				// where the arguments start in the frame
			
				// if args.n >= 0, pointer map of args region
			
				// depth in call stack (0 == most recent)
			
				// callee sp
		
			
			func dumpframe(s *stkframe, child *childInfo)
	
		Global chunk index.
		
		Represents an index into the leaf level of the radix tree.
		Similar to arenaIndex, except instead of arenas, it divides the address
		space into chunks.
		
			
			
				l1 returns the index into the first level of (*pageAlloc).chunks.
			
				l2 returns the index into the second level of (*pageAlloc).chunks.
		
			
			func chunkIndex(p uintptr) chunkIdx
		
			
			func chunkBase(ci chunkIdx) uintptr
	
		consistentHeapStats represents a set of various memory statistics
		whose updates must be viewed completely to get a consistent
		state of the world.
		
		To write updates to memory stats use the acquire and release
		methods. To obtain a consistent global snapshot of these statistics,
		use read.
		
			
			
				gen represents the current index into which writers
				are writing, and can take on the value of 0, 1, or 2.
			
				noPLock is intended to provide mutual exclusion for updating
				stats when no P is available. It does not block other writers
				with a P, only other writers without a P and the reader. Because
				stats are usually updated when a P is available, contention on
				this lock should be minimal.
			
				stats is a ring buffer of heapStatsDelta values.
				Writers always atomically update the delta at index gen.
				
				Readers operate by rotating gen (0 -> 1 -> 2 -> 0 -> ...)
				and synchronizing with writers by observing each P's
				statsSeq field. If the reader observes a P not writing,
				it can be sure that it will pick up the new gen value the
				next time it writes.
				
				The reader then takes responsibility by clearing space
				in the ring buffer for the next reader to rotate gen to
				that space (i.e. it merges in values from index (gen-2) mod 3
				to index (gen-1) mod 3, then clears the former).
				
				Note that this means only one reader can be reading at a time.
				There is no way for readers to synchronize.
				
				This process is why we need a ring buffer of size 3 instead
				of 2: one is for the writers, one contains the most recent
				data, and the last one is clear so writers can begin writing
				to it the moment gen is updated.
		
			
			
				acquire returns a heapStatsDelta to be updated. In effect,
				it acquires the shard for writing. release must be called
				as soon as the relevant deltas are updated.
				
				The returned heapStatsDelta must be updated atomically.
				
				The caller's P must not change between acquire and
				release. This also means that the caller should not
				acquire a P or release its P in between. A P also must
				not acquire a given consistentHeapStats if it hasn't
				yet released it.
				
				nosplit because a stack growth in this function could
				lead to a stack allocation that could reenter the
				function.
			
				read takes a globally consistent snapshot of m
				and puts the aggregated value in out. Even though out is a
				heapStatsDelta, the resulting values should be complete and
				valid statistic values.
				
				Not safe to call concurrently. The world must be stopped
				or metricsSema must be held.
			
				release indicates that the writer is done modifying
				the delta. The value returned by the corresponding
				acquire must no longer be accessed or modified after
				release is called.
				
				The caller's P must not change between acquire and
				release. This also means that the caller should not
				acquire a P or release its P in between.
				
				nosplit because a stack growth in this function could
				lead to a stack allocation that causes another acquire
				before this operation has completed.
			
				unsafeClear clears the shard.
				
				Unsafe because the world must be stopped and values should
				be donated elsewhere before clearing.
			
				unsafeRead aggregates the delta for this shard into out.
				
				Unsafe because it does so without any synchronization. The
				world must be stopped.
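		
		A much-simplified sketch of the three-slot rotation described above: one slot
		for current writers, one holding recently retired data, and one kept clear
		for the next rotation. The per-P sequence numbers, noPLock, and the
		writer-side synchronization of the real scheme are omitted, and all names
		are illustrative.
		
			package main
			
			import (
				"fmt"
				"sync/atomic"
			)
			
			// statsDelta stands in for heapStatsDelta: counters that must be read together.
			type statsDelta struct {
				allocated atomic.Int64
				freed     atomic.Int64
			}
			
			type consistentStats struct {
				gen   atomic.Uint32 // slot writers are currently writing to: 0, 1, or 2
				stats [3]statsDelta
			}
			
			// acquire returns the delta writers should update atomically.
			func (s *consistentStats) acquire() *statsDelta {
				return &s.stats[s.gen.Load()%3]
			}
			
			// read rotates gen to the cleared slot, merges the oldest retired slot into
			// the just-retired one, clears the former, and reports the aggregate.
			func (s *consistentStats) read(out *statsDelta) {
				g := s.gen.Load()
				s.gen.Store((g + 1) % 3) // writers move to the slot cleared by the last read
				// The real implementation now waits for in-flight writers to finish
				// with slot g by observing each P's statsSeq field.
				cur, old := &s.stats[g], &s.stats[(g+2)%3]
				cur.allocated.Add(old.allocated.Load())
				cur.freed.Add(old.freed.Load())
				old.allocated.Store(0)
				old.freed.Store(0)
				out.allocated.Store(cur.allocated.Load())
				out.freed.Store(cur.freed.Load())
			}
			
			func main() {
				var s consistentStats
				s.acquire().allocated.Add(100)
				var snap statsDelta
				s.read(&snap)
				fmt.Println(snap.allocated.Load(), snap.freed.Load()) // 100 0
			}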
	
		A coro represents extra concurrency without extra parallelism,
		as would be needed for a coroutine implementation.
		The coro does not represent a specific coroutine, only the ability
		to do coroutine-style control transfers.
		It can be thought of as like a special channel that always has
		a goroutine blocked on it. If another goroutine calls coroswitch(c),
		the caller becomes the goroutine blocked in c, and the goroutine
		formerly blocked in c starts running.
		These switches continue until a call to coroexit(c),
		which ends the use of the coro by releasing the blocked
		goroutine in c and exiting the current goroutine.
		
		Coros are heap allocated and garbage collected, so that user code
		can hold a pointer to a coro without causing potential dangling
		pointer errors.
		
			
			f func(*coro)
			gp guintptr
			
				// mp's external LockOSThread counter at coro creation time.
			
				// mp's internal lockOSThread counter at coro creation time.
			
				State for validating thread-lock interactions.
		
			
			func newcoro(f func(*coro)) *coro
		
			
			func coroexit(c *coro)
			func coroswitch(c *coro)
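		
		The control transfer described above can be emulated in ordinary Go with an
		unbuffered channel. The sketch below captures the semantics of coroswitch and
		coroexit, not the runtime's implementation, which switches goroutines
		directly without a channel; all names here are made up.
		
			package main
			
			import "fmt"
			
			type coroish struct {
				baton chan struct{} // "the goroutine blocked in c"
				done  chan struct{} // closed by the coroexit analogue
			}
			
			func newCoroish(f func(*coroish)) *coroish {
				c := &coroish{baton: make(chan struct{}), done: make(chan struct{})}
				go func() {
					<-c.baton // start out blocked in c
					f(c)
					close(c.done) // coroexit: release any switcher and exit
				}()
				return c
			}
			
			// coroswitch hands control to the goroutine blocked in c and blocks the
			// caller in its place.
			func (c *coroish) coroswitch() {
				select {
				case c.baton <- struct{}{}:
				case <-c.done:
					return
				}
				select {
				case <-c.baton:
				case <-c.done:
				}
			}
			
			func main() {
				c := newCoroish(func(c *coroish) {
					fmt.Println("coroutine: first")
					c.coroswitch()
					fmt.Println("coroutine: second")
				})
				fmt.Println("main: before switch")
				c.coroswitch()
				fmt.Println("main: after switch")
				c.coroswitch()
				fmt.Println("main: done")
			}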
	
		
			
			
				extra holds extra stacks accumulated in addNonGo
				corresponding to profiling signals arriving on
				non-Go-created threads. Those stacks are written
				to log the next time a normal Go thread gets the
				signal handler.
				Assuming the stacks are 2 words each (we don't get
				a full traceback from those threads), plus one word
				size for framing, 100 Hz profiling would generate
				300 words per second.
				Hopefully a normal Go thread will get the profiling
				signal at least once every few seconds.
			lock mutex
			
				// profile events written here
			
				// count of frames lost because of being in atomic64 on mips/arm; updated racily
			
				// count of frames lost because extra is full
			numExtra int
			
				// profiling is on
		
			
			
				add adds the stack trace to the profile.
				It is called from signal handlers and other limited environments
				and cannot allocate memory or acquire locks that might be
				held at the time of the signal, nor can it use substantial amounts
				of stack.
			
				addExtra adds the "extra" profiling events,
				queued by addNonGo, to the profile log.
				addExtra is called either from a signal handler on a Go thread
				or from an ordinary goroutine; either way it can use stack
				and has a g. The world may be stopped, though.
			
				addNonGo adds the non-Go stack trace to the profile.
				It is called from a non-Go thread, so we cannot use much stack at all,
				nor do anything that needs a g or an m.
				In particular, we can't call cpuprof.log.write.
				Instead, we copy the stack into cpuprof.extra,
				which will be drained the next time a Go thread
				gets the signal handling event.
		
			
			  var cpuprof
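		
		This internal machinery ultimately services the public CPU profiling entry
		points. A typical user-level capture via runtime/pprof looks like the sketch
		below; the output file name and the workload are arbitrary.
		
			package main
			
			import (
				"log"
				"os"
				"runtime/pprof"
			)
			
			func main() {
				f, err := os.Create("cpu.pprof")
				if err != nil {
					log.Fatal(err)
				}
				defer f.Close()
			
				if err := pprof.StartCPUProfile(f); err != nil {
					log.Fatal(err)
				}
				defer pprof.StopCPUProfile()
			
				// Workload to be profiled.
				sum := 0
				for i := 0; i < 1e7; i++ {
					sum += i
				}
				_ = sum
			}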
	
		
			
				// GC assists
			
				// GC dedicated mark workers + pauses
			
				// GC idle mark workers
			
				// GC pauses (all GOMAXPROCS, even if just 1 is running)
			GCTotalTime int64
			
				// Time Ps spent in _Pidle.
			
				// background scavenger
			
				// scavenge assists
			ScavengeTotalTime int64
			
				// GOMAXPROCS * (monotonic wall clock time elapsed)
			
				// Time Ps spent in _Prunning or _Psyscall that's not any of the above.
		
			
			
				accumulate takes a cpuStats and adds in the current state of all GC CPU
				counters.
				
				gcMarkPhase indicates that we're in the mark phase and that certain counter
				values should be used.
			
				accumulateGCPauseTime adds dt*maxProcs to the GC CPU pause time stats. dt should be
				the actual time spent paused, for orthogonality. maxProcs should be GOMAXPROCS,
				not work.stwprocs, since this number must be comparable to a total time computed
				from GOMAXPROCS.
	
		cpuStatsAggregate represents CPU stats obtained from the runtime
		acquired together to avoid skew and inconsistencies.
		
			
				// GC assists
			
				// GC dedicated mark workers + pauses
			
				// GC idle mark workers
			
				// GC pauses (all GOMAXPROCS, even if just 1 is running)
			cpuStats.GCTotalTime int64
			
				// Time Ps spent in _Pidle.
			
				// background scavenger
			
				// scavenge assists
			cpuStats.ScavengeTotalTime int64
			
				// GOMAXPROCS * (monotonic wall clock time elapsed)
			
				// Time Ps spent in _Prunning or _Psyscall that's not any of the above.
			
			cpuStats cpuStats
		
			
			
				accumulate takes a cpuStats and adds in the current state of all GC CPU
				counters.
				
				gcMarkPhase indicates that we're in the mark phase and that certain counter
				values should be used.
			
				accumulateGCPauseTime adds dt*maxProcs to the GC CPU pause time stats. dt should be
				the actual time spent paused, for orthogonality. maxProcs should be GOMAXPROCS,
				not work.stwprocs, since this number must be comparable to a total time computed
				from GOMAXPROCS.
			
				compute populates the cpuStatsAggregate with values from the runtime.
	
		
			
			
				// for variables that can be changed during execution
			
				// default value (ideally zero)
			name string
			
				// for variables that can only be set at startup
	
		
			
			
				begin and end are the positions in the log of the beginning
				and end of the log data, modulo len(data).
			data *debugLogBuf
			
				begin and end are the positions in the log of the beginning
				and end of the log data, modulo len(data).
			
				tick and nano are the current time base at begin.
			
				tick and nano are the current time base at begin.
		
			
			(*debugLogReader) header() (end, tick, nano uint64, p int)
			(*debugLogReader) peek() (tick uint64)
			(*debugLogReader) printVal() bool
			(*debugLogReader) readUint16LEAt(pos uint64) uint16
			(*debugLogReader) readUint64LEAt(pos uint64) uint64
			(*debugLogReader) skip() uint64
			(*debugLogReader) uvarint() uint64
			(*debugLogReader) varint() int64
	
		A debugLogWriter is a ring buffer of binary debug log records.
		
		A log record consists of a 2-byte framing header and a sequence of
		fields. The framing header gives the size of the record as a little
		endian 16-bit value. Each field starts with a byte indicating its
		type, followed by type-specific data. If the size in the framing
		header is 0, it's a sync record consisting of two little endian
		64-bit values giving a new time base.
		
		Because this is a ring buffer, new records will eventually
		overwrite old records. Hence, it maintains a reader that consumes
		the log as it gets overwritten. That reader state is where an
		actual log reader would start.
		
			
			
				buf is a scratch buffer for encoding. This is here to
				reduce stack usage.
			data debugLogBuf
			
				tick and nano are the time bases from the most recently
				written sync record.
			
				r is a reader that consumes records as they get overwritten
				by the writer. It also acts as the initial reader state
				when printing the log.
			
				tick and nano are the time bases from the most recently
				written sync record.
			write uint64
		
			
			(*debugLogWriter) byte(x byte)
			(*debugLogWriter) bytes(x []byte)
			(*debugLogWriter) ensure(n uint64)
			(*debugLogWriter) uvarint(u uint64)
			(*debugLogWriter) varint(x int64)
			(*debugLogWriter) writeFrameAt(pos, size uint64) bool
			(*debugLogWriter) writeSync(tick, nano uint64)
			(*debugLogWriter) writeUint64LE(x uint64)
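		
		The record framing described above can be sketched with encoding/binary. The
		field tags below are invented for illustration, and the real writer encodes
		into a ring buffer rather than an appended slice.
		
			package main
			
			import (
				"encoding/binary"
				"fmt"
			)
			
			// Illustrative field tags; the runtime uses its own tag values.
			const (
				tagUvarint = 1
				tagString  = 2
			)
			
			// appendRecord frames a pre-encoded record body with a little-endian
			// 16-bit size header.
			func appendRecord(dst, body []byte) []byte {
				dst = binary.LittleEndian.AppendUint16(dst, uint16(len(body)))
				return append(dst, body...)
			}
			
			// appendSync appends a sync record: a zero size header followed by two
			// little-endian 64-bit values giving a new time base.
			func appendSync(dst []byte, tick, nano uint64) []byte {
				dst = binary.LittleEndian.AppendUint16(dst, 0)
				dst = binary.LittleEndian.AppendUint64(dst, tick)
				return binary.LittleEndian.AppendUint64(dst, nano)
			}
			
			func main() {
				var body []byte
				body = append(body, tagUvarint)
				body = binary.AppendUvarint(body, 42)
				body = append(body, tagString)
				body = binary.AppendUvarint(body, uint64(len("hello")))
				body = append(body, "hello"...)
			
				var log []byte
				log = appendSync(log, 123456, 789)
				log = appendRecord(log, body)
				fmt.Printf("% x\n", log)
			}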
	
		
			
			( dlogger) b(x bool) dloggerFake
			( dlogger) end()
			( dlogger) hex(x uint64) dloggerFake
			( dlogger) i(x int) dloggerFake
			( dlogger) i16(x int16) dloggerFake
			( dlogger) i32(x int32) dloggerFake
			( dlogger) i64(x int64) dloggerFake
			( dlogger) i8(x int8) dloggerFake
			( dlogger) p(x any) dloggerFake
			( dlogger) pc(x uintptr) dloggerFake
			( dlogger) s(x string) dloggerFake
			( dlogger) traceback(x []uintptr) dloggerFake
			( dlogger) u(x uint) dloggerFake
			( dlogger) u16(x uint16) dloggerFake
			( dlogger) u32(x uint32) dloggerFake
			( dlogger) u64(x uint64) dloggerFake
			( dlogger) u8(x uint8) dloggerFake
			( dlogger) uptr(x uintptr) dloggerFake
		
			
			func dlog() dlogger
			func dlog1() dloggerFake
			func dlogFake() dloggerFake
	
		A dloggerFake is a no-op implementation of dlogger.
		
			
			( dloggerFake) b(x bool) dloggerFake
			( dloggerFake) end()
			( dloggerFake) hex(x uint64) dloggerFake
			( dloggerFake) i(x int) dloggerFake
			( dloggerFake) i16(x int16) dloggerFake
			( dloggerFake) i32(x int32) dloggerFake
			( dloggerFake) i64(x int64) dloggerFake
			( dloggerFake) i8(x int8) dloggerFake
			( dloggerFake) p(x any) dloggerFake
			( dloggerFake) pc(x uintptr) dloggerFake
			( dloggerFake) s(x string) dloggerFake
			( dloggerFake) traceback(x []uintptr) dloggerFake
			( dloggerFake) u(x uint) dloggerFake
			( dloggerFake) u16(x uint16) dloggerFake
			( dloggerFake) u32(x uint32) dloggerFake
			( dloggerFake) u64(x uint64) dloggerFake
			( dloggerFake) u8(x uint8) dloggerFake
			( dloggerFake) uptr(x uintptr) dloggerFake
		
			
			func dlog() dlogger
			func dlog1() dloggerFake
			func dlogFake() dloggerFake
	
		A dloggerImpl writes to the debug log.
		
		To obtain a dloggerImpl, call dlog(). When done with the dloggerImpl, call
		end().
		
			
			
				allLink is the next dlogger in the allDloggers list.
			
				owned indicates that this dlogger is owned by an M. This is
				accessed atomically.
			w debugLogWriter
		
			
			(*dloggerImpl) b(x bool) *dloggerImpl
			(*dloggerImpl) end()
			(*dloggerImpl) hex(x uint64) *dloggerImpl
			(*dloggerImpl) i(x int) *dloggerImpl
			(*dloggerImpl) i16(x int16) *dloggerImpl
			(*dloggerImpl) i32(x int32) *dloggerImpl
			(*dloggerImpl) i64(x int64) *dloggerImpl
			(*dloggerImpl) i8(x int8) *dloggerImpl
			(*dloggerImpl) p(x any) *dloggerImpl
			(*dloggerImpl) pc(x uintptr) *dloggerImpl
			(*dloggerImpl) s(x string) *dloggerImpl
			(*dloggerImpl) traceback(x []uintptr) *dloggerImpl
			(*dloggerImpl) u(x uint) *dloggerImpl
			(*dloggerImpl) u16(x uint16) *dloggerImpl
			(*dloggerImpl) u32(x uint32) *dloggerImpl
			(*dloggerImpl) u64(x uint64) *dloggerImpl
			(*dloggerImpl) u8(x uint8) *dloggerImpl
			(*dloggerImpl) uptr(x uintptr) *dloggerImpl
		
			
			func dlogImpl() *dloggerImpl
			func getCachedDlogger() *dloggerImpl
		
			
			func putCachedDlogger(l *dloggerImpl) bool
		
			
			  var allDloggers *dloggerImpl
	
		
			
			_type *_type
			data unsafe.Pointer
		
			
			func efaceOf(ep *any) *eface
		
			
			func printeface(e eface)
			func reflect_ifaceE2I(inter *interfacetype, e eface, dst *iface)
			func reflectlite_ifaceE2I(inter *interfacetype, e eface, dst *iface)
	
		
			
			
				// Dynamic entry type
			
				// Integer value
	
		
			
			
				// ELF header size in bytes
			
				// Entry point virtual address
			
				// Processor-specific flags
			
				// Magic number and other info
			
				// Architecture
			
				// Program header table entry size
			
				// Program header table entry count
			
				// Program header table file offset
			
				// Section header table entry size
			
				// Section header table entry count
			
				// Section header table file offset
			
				// Section header string table index
			
				// Object file type
			
				// Object file version
		
			
			func vdsoInitFromSysinfoEhdr(info *vdsoInfo, hdr *elfEhdr)
	
		
			
			
				// Segment alignment
			
				// Segment size in file
			
				// Segment flags
			
				// Segment size in memory
			
				// Segment file offset
			
				// Segment physical address
			
				// Segment type
			
				// Segment virtual address
	
		
			
			
				// Section virtual addr at execution
			
				// Section alignment
			
				// Entry size if section holds table
			
				// Section flags
			
				// Additional section information
			
				// Link to another section
			
				// Section name (string tbl index)
			
				// Section file offset
			
				// Section size in bytes
			
				// Section type
	
		
			
			
				// Version or dependency names
			
				// Offset in bytes to next verdaux entry
	
		
			
			
				// Offset in bytes to verdaux array
			
				// Number of associated aux entries
			
				// Version information
			
				// Version name hash value
			
				// Version Index
			
				// Offset in bytes to next verdef entry
			
				// Version revision
	
		
			
			
				// memory address where the error occurred
			
				// error message
		
			
				Addr returns the memory address where a fault occurred.
				The address provided is best-effort.
				The veracity of the result may depend on the platform.
				Errors providing this method will only be returned as
				a result of using [runtime/debug.SetPanicOnFault].
			( errorAddressString) Error() string
			( errorAddressString) RuntimeError()
		
			 errorAddressString : Error
			 errorAddressString : error
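		
		From user code, errors of this kind surface only after enabling
		[runtime/debug.SetPanicOnFault], as noted above. The sketch below shows the
		corresponding recovery pattern; faultAddr is an illustrative helper, and the
		function it runs here never actually faults.
		
			package main
			
			import (
				"fmt"
				"runtime/debug"
			)
			
			// faultAddr runs f with fault panics enabled and, if f faults, reports the
			// (best-effort) faulting address exposed by the panic value's Addr method.
			func faultAddr(f func()) (addr uintptr, faulted bool) {
				old := debug.SetPanicOnFault(true)
				defer debug.SetPanicOnFault(old)
				defer func() {
					if r := recover(); r != nil {
						faulted = true
						if a, ok := r.(interface{ Addr() uintptr }); ok {
							addr = a.Addr()
						}
					}
				}()
				f()
				return
			}
			
			func main() {
				addr, faulted := faultAddr(func() {
					// Code that might touch unmapped memory (e.g. a truncated
					// memory-mapped file) would go here.
				})
				fmt.Println(faulted, addr) // false 0
			}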
	
		An errorString represents a runtime error described by a single string.
		
			( errorString) Error() string
			( errorString) RuntimeError()
		
			 errorString : Error
			 errorString : error
	
		NOTE: Layout known to queuefinalizer.
		
			
			
				// ptr to object (may be a heap pointer)
			
				// type of first argument of fn
			
				// function to call (may be a heap pointer)
			
				// bytes of return values from fn
			
				// type of ptr to object (may be a heap pointer)
	
		finblock is an array of finalizers to be executed. finblocks are
		arranged in a linked list for the finalizer queue.
		
		finblock is allocated from non-GC'd memory, so any heap pointers
		must be specially handled. GC currently assumes that the finalizer
		queue does not grow during marking (but it can shrink).
		
			
			alllink *finblock
			cnt uint32
			fin [101]finalizer
			next *finblock
		
			
			  var allfin *finblock
			  var finc *finblock
			  var finq *finblock
	
		findfuncbucket is an array of these structures.
		Each bucket represents 4096 bytes of the text segment.
		Each subbucket represents 256 bytes of the text segment.
		To find a function given a pc, locate the bucket and subbucket for
		that pc. Add together the idx and subbucket value to obtain a
		function index. Then scan the functab array starting at that
		index to find the target function.
		This table uses 20 bytes for every 4096 bytes of code, or ~0.5% overhead.
		
			
			idx uint32
			subbuckets [16]byte
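		
		The lookup arithmetic described above, sketched with made-up data; the real
		lookup then scans functab starting at the returned index.
		
			package main
			
			import "fmt"
			
			// bucketEntry mirrors the layout described above: idx is the base function
			// index for a 4096-byte bucket, and each of the 16 subbuckets (256 bytes
			// each) stores a small offset from that base.
			type bucketEntry struct {
				idx        uint32
				subbuckets [16]byte
			}
			
			// funcIndex returns the functab index at which to start scanning for the
			// function containing the given offset into the text segment.
			func funcIndex(buckets []bucketEntry, pcOff uintptr) uint32 {
				b := buckets[pcOff/4096]
				sub := (pcOff % 4096) / 256
				return b.idx + uint32(b.subbuckets[sub])
			}
			
			func main() {
				buckets := []bucketEntry{
					{idx: 0, subbuckets: [16]byte{0, 0, 1, 1, 2, 2, 2, 3, 3, 3, 3, 4, 4, 5, 5, 5}},
					{idx: 6, subbuckets: [16]byte{0, 1, 1, 1, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 6, 6}},
				}
				fmt.Println(funcIndex(buckets, 0x0500)) // bucket 0, subbucket 5
				fmt.Println(funcIndex(buckets, 0x1100)) // bucket 1, subbucket 1
			}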
	
		fixalloc is a simple free-list allocator for fixed size objects.
		Malloc uses a FixAlloc wrapped around sysAlloc to manage its
		mcache and mspan objects.
		
		Memory returned by fixalloc.alloc is zeroed by default, but the
		caller may take responsibility for zeroing allocations by setting
		the zero flag to false. This is only safe if the memory never
		contains heap pointers.
		
		The caller is responsible for locking around FixAlloc calls.
		Callers can keep state in the object but the first word is
		smashed by freeing and reallocating.
		
		Consider marking fixalloc'd types not in heap by embedding
		internal/runtime/sys.NotInHeap.
		
			
			arg unsafe.Pointer
			
				// use uintptr instead of unsafe.Pointer to avoid write barriers
			
				// called first time p is returned
			
				// in-use bytes now
			list *mlink
			
				// size of new chunks in bytes
			
				// bytes remaining in current chunk
			size uintptr
			stat *sysMemStat
			
				// zero allocations
		
			
			(*fixalloc) alloc() unsafe.Pointer
			(*fixalloc) free(p unsafe.Pointer)
			
				Initialize f to allocate objects of the given size,
				using the allocator to obtain chunks of memory.
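		
		A toy, type-safe analogue of the free-list pattern described above; fixedPool
		is a made-up name, and unlike fixalloc it works on typed slices rather than
		raw memory whose first word doubles as the free-list link.
		
			package main
			
			import "fmt"
			
			// fixedPool hands out fixed-size objects, carving them out of larger chunks
			// and recycling freed ones through a free list.
			type fixedPool[T any] struct {
				chunkLen int
				chunk    []T  // current chunk being carved up
				free     []*T // recycled objects
			}
			
			func (p *fixedPool[T]) alloc() *T {
				if n := len(p.free); n > 0 {
					obj := p.free[n-1]
					p.free = p.free[:n-1]
					*obj = *new(T) // zero the allocation, like fixalloc's default
					return obj
				}
				if len(p.chunk) == 0 {
					p.chunk = make([]T, p.chunkLen) // grab a new chunk
				}
				obj := &p.chunk[0]
				p.chunk = p.chunk[1:]
				return obj
			}
			
			func (p *fixedPool[T]) freeObj(obj *T) {
				p.free = append(p.free, obj)
			}
			
			func main() {
				pool := &fixedPool[[64]byte]{chunkLen: 1024}
				a := pool.alloc()
				b := pool.alloc()
				pool.freeObj(a)
				c := pool.alloc() // reuses a's slot
				fmt.Println(a == c, b != c) // true true
			}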
	
		
			
			_st [8]fpxreg
			_xmm [16]xmmreg
			cwd uint16
			fop uint16
			ftw uint16
			mxcr_mask uint32
			mxcsr uint32
			padding [24]uint32
			rdp uint64
			rip uint64
			swd uint16
	
		
			
			_st [8]fpxreg1
			_xmm [16]xmmreg1
			cwd uint16
			fop uint16
			ftw uint16
			mxcr_mask uint32
			mxcsr uint32
			padding [24]uint32
			rdp uint64
			rip uint64
			swd uint16
	
		
			
				// Only in static data
			
			_func *_func
			
				// in/out args size
			
				// runtime.cutab offset of this function's CU
			
				// offset of start of a deferreturn call instruction from entry, if any.
			
				// start pc, as offset from moduledata.text/pcHeader.textStart
			_func.flag abi.FuncFlag
			
				// set for certain special runtime functions
			
				// function name, as index into moduledata.funcnametab.
			
				// must be last, must end on a uint32-aligned boundary
			_func.npcdata uint32
			_func.pcfile uint32
			_func.pcln uint32
			_func.pcsp uint32
			
				// line number of start of function (func keyword/TEXT directive)
			datap *moduledata
		
			
			( funcInfo) _Func() *Func
			
				entry returns the entry PC for f.
				
				entry should be an internal detail,
				but widely used packages access it using linkname.
				Notable members of the hall of shame include:
				  - github.com/phuslu/log
				
				Do not remove or change the type signature.
				See go.dev/issue/67401.
			( funcInfo) funcInfo() funcInfo
			
				isInlined reports whether f should be re-interpreted as a *funcinl.
			( funcInfo) srcFunc() srcFunc
			( funcInfo) valid() bool
		
			
			func findfunc(pc uintptr) funcInfo
			func (*Func).funcInfo() funcInfo
		
			
			func adjustpointers(scanp unsafe.Pointer, bv *bitvector, adjinfo *adjustinfo, f funcInfo)
			func badFuncInfoEntry(funcInfo) uintptr
			func funcdata(f funcInfo, i uint8) unsafe.Pointer
			func funcfile(f funcInfo, fileno int32) string
			func funcline(f funcInfo, targetpc uintptr) (file string, line int32)
			func funcline1(f funcInfo, targetpc uintptr, strict bool) (file string, line int32)
			func funcMaxSPDelta(f funcInfo) int32
			func funcname(f funcInfo) string
			func funcpkgpath(f funcInfo) string
			func funcspdelta(f funcInfo, targetpc uintptr) int32
			func newInlineUnwinder(f funcInfo, pc uintptr) (inlineUnwinder, inlineFrame)
			func pcdatastart(f funcInfo, table uint32) uint32
			func pcdatavalue(f funcInfo, table uint32, targetpc uintptr) int32
			func pcdatavalue1(f funcInfo, table uint32, targetpc uintptr, strict bool) int32
			func pcdatavalue2(f funcInfo, table uint32, targetpc uintptr) (int32, uintptr)
			func pcvalue(f funcInfo, off uint32, targetpc uintptr, strict bool) (int32, uintptr)
			func printAncestorTracebackFuncInfo(f funcInfo, pc uintptr)
			func printArgs(f funcInfo, argp unsafe.Pointer, pc uintptr)
			func printcreatedby1(f funcInfo, pc uintptr, goid uint64)
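		
		findfunc and the funcInfo accessors above are internal; the equivalent
		user-facing lookup goes through runtime.Callers and runtime.CallersFrames,
		for example:
		
			package main
			
			import (
				"fmt"
				"runtime"
			)
			
			func main() {
				pcs := make([]uintptr, 8)
				n := runtime.Callers(1, pcs) // skip runtime.Callers itself
				frames := runtime.CallersFrames(pcs[:n])
				for {
					frame, more := frames.Next()
					fmt.Printf("%s\n\t%s:%d\n", frame.Function, frame.File, frame.Line)
					if !more {
						break
					}
				}
			}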
	
		Pseudo-Func that is returned for PCs that occur in inlined code.
		A *Func can be either a *_func or a *funcinl, and they are distinguished
		by the first uintptr.
		
		TODO(austin): Can we merge this with inlinedCall?
		
			
			
				// entry of the real (the "outermost") frame
			file string
			line int32
			name string
			
				// set to ^0 to distinguish from _func
			startLine int32
	
		
			
			fn uintptr
		
			
			func addCleanup(p unsafe.Pointer, f *funcval) uint64
			func addfinalizer(p unsafe.Pointer, f *funcval, nret uintptr, fint *_type, ot *ptrtype) bool
			func dumpfinalizer(obj unsafe.Pointer, fn *funcval, fint *_type, ot *ptrtype)
			func finq_callback(fn *funcval, obj unsafe.Pointer, nret uintptr, fint *_type, ot *ptrtype)
			func gostartcallfn(gobuf *gobuf, fv *funcval)
			func newproc(fn *funcval)
			func newproc1(fn *funcval, callergp *g, callerpc uintptr, parked bool, waitreason waitReason) *g
			func queuefinalizer(p unsafe.Pointer, fn *funcval, nret uintptr, fint *_type, ot *ptrtype)
	
		
			
			
				// innermost defer
			
				// innermost panic - offset known to liblink
			
				activeStackChans indicates that there are unlocked channels
				pointing into this goroutine's stack. If true, stack
				copying needs to acquire channel locks to protect these
				areas of the stack.
			
				// ancestor information for the goroutine(s) that created this goroutine (only used if debug.tracebackancestors)
			
				asyncSafePoint is set if g is stopped at an asynchronous
				safe point. This means there are frames on the stack
				without precise pointer information.
			atomicstatus atomic.Uint32
			
				// cgo traceback context
			
				// argument during coroutine transfers
			
				// argument to coroswitch_m
			fipsIndicator uint8
			
				gcAssistBytes is this G's GC assist credit in terms of
				bytes allocated. If this is positive, then the G has credit
				to allocate gcAssistBytes bytes without assisting. If this
				is negative, then the G must correct this by performing
				scan work. We track this in bytes to make it fast to update
				and check for debt in the malloc hot path. The assist ratio
				determines how this corresponds to scan work debt.
			
				// g has scanned stack; protected by _Gscan bit in status
			goid uint64
			
				// pc of go statement that created this goroutine
			
				goroutineProfiled indicates the status of this goroutine's stack for the
				current in-progress goroutine profile
			
				inMarkAssist indicates whether the goroutine is in mark assist.
				Used by the execution tracer.
			
				// profiler labels
			lockedm muintptr
			
				// current m; offset known to arm liblink
			
				// whether disable callback from C
			
				// panic (instead of crash) on unexpected fault address
			
				param is a generic pointer parameter field used to pass
				values in particular contexts where other storage for the
				parameter would be difficult to find. It is currently used
				in four ways:
				1. When a channel operation wakes up a blocked goroutine, it sets param to
				   point to the sudog of the completed blocking operation.
				2. By gcAssistAlloc1 to signal back to its caller that the goroutine completed
				   the GC cycle. It is unsafe to do so in any other way, because the goroutine's
				   stack may have moved in the meantime.
				3. By debugCallWrap to pass parameters to a new goroutine because allocating a
				   closure in the runtime is forbidden.
				4. When a panic is recovered and control returns to the respective frame,
				   param may point to a savedOpenDeferState.
			
				// goid of goroutine that created this goroutine
			
				parkingOnChan indicates that the goroutine is about to
				park on a chansend or chanrecv. Used to signal an unsafe point
				for stack shrinking.
			
				// preemption signal, duplicates stackguard0 = stackpreempt
			
				// shrink stack at synchronous safe point
			
				// transition to _Gpreempted on preemption; otherwise, just deschedule
			racectx uintptr
			
				// ignore race detection events
			
				// the amount of time spent runnable, cleared when running, only used when tracking
			sched gobuf
			schedlink guintptr
			
				// are we participating in a select and did someone win the race?
			sig uint32
			sigcode0 uintptr
			sigcode1 uintptr
			sigpc uintptr
			
				// when to sleep until
			
				Stack parameters.
				stack describes the actual stack memory: [stack.lo, stack.hi).
				stackguard0 is the stack pointer compared in the Go stack growth prologue.
				It is stack.lo+StackGuard normally, but can be StackPreempt to trigger a preemption.
				stackguard1 is the stack pointer compared in the //go:systemstack stack growth prologue.
				It is stack.lo+StackGuard on g0 and gsignal stacks.
				It is ~0 on other goroutine stacks, to trigger a call to morestackc (and crash).
				// offset known to runtime/cgo
			
				// sigprof/scang lock; TODO: fold in to atomicstatus
			
				// offset known to liblink
			
				// offset known to liblink
			
				// pc of goroutine function
			
				// expected sp at top of stack, to check in traceback
			syncGroup *synctestGroup
			
				// if status==Gsyscall, syscallbp = sched.bp to use in fpTraceback
			
				// if status==Gsyscall, syscallpc = sched.pc to use during gc
			
				// if status==Gsyscall, syscallsp = sched.sp to use during gc
			
				// must not split stack
			
				// cached timer for time.Sleep
			
				Per-G tracer state.
			
				// whether we're tracking this G for sched latency statistics
			
				// used to decide whether to track this G
			
				// timestamp of when the G last started being tracked
			
				// sudog structures this g is waiting on (that have a valid elem ptr); in lock order
			
				// if status==Gwaiting
			
				// approx time when the g became blocked
			writebuf []byte
		
			
			(*g) guintptr() guintptr
		
			
			func allGsSnapshot() []*g
			func atomicAllG() (**g, uintptr)
			func atomicAllGIndex(ptr **g, i uintptr) *g
			func beforeIdle(int64, int64) (*g, bool)
			func checkIdleGCNoP() (*p, *g)
			func fatalsignal(sig uint32, c *sigctxt, gp *g, mp *m) *g
			func findRunnable() (gp *g, inheritTime, tryWakeP bool)
			func getg() *g
			func gfget(pp *p) *g
			func globrunqget(pp *p, max int32) *g
			func malg(stacksize int32) *g
			func netpollunblock(pd *pollDesc, mode int32, ioready bool, delta *int32) *g
			func newproc1(fn *funcval, callergp *g, callerpc uintptr, parked bool, waitreason waitReason) *g
			func runqget(pp *p) (gp *g, inheritTime bool)
			func runqsteal(pp, p2 *p, stealRunNextG bool) *g
			func sigFetchG(c *sigctxt) *g
			func stealWork(now int64) (gp *g, inheritTime bool, rnow, pollUntil int64, newWork bool)
			func traceReader() *g
			func traceReaderAvailable() *g
			func wakefing() *g
		
			
			func adjustctxt(gp *g, adjinfo *adjustinfo)
			func adjustdefers(gp *g, adjinfo *adjustinfo)
			func adjustpanics(gp *g, adjinfo *adjustinfo)
			func adjustsudogs(gp *g, adjinfo *adjustinfo)
			func allgadd(gp *g)
			func atomicAllGIndex(ptr **g, i uintptr) *g
			func casfrom_Gscanstatus(gp *g, oldval, newval uint32)
			func casGFromPreempted(gp *g, old, new uint32) bool
			func casgstatus(gp *g, oldval, newval uint32)
			func casGToPreemptScan(gp *g, old, new uint32)
			func casGToWaiting(gp *g, old uint32, reason waitReason)
			func casGToWaitingForSuspendG(gp *g, old uint32, reason waitReason)
			func castogscanstatus(gp *g, oldval, newval uint32) bool
			func chanparkcommit(gp *g, chanLock unsafe.Pointer) bool
			func copystack(gp *g, newsize uintptr)
			func coroswitch_m(gp *g)
			func dopanic_m(gp *g, pc, sp uintptr) bool
			func doRecordGoroutineProfile(gp1 *g, pcbuf []uintptr)
			func doSigPreempt(gp *g, ctxt *sigctxt)
			func dumpgoroutine(gp *g)
			func dumpgstatus(gp *g)
			func execute(gp *g, inheritTime bool)
			func exitsyscall0(gp *g)
			func fatalsignal(sig uint32, c *sigctxt, gp *g, mp *m) *g
			func finalizercommit(gp *g, lock unsafe.Pointer) bool
			func findsghi(gp *g, stk stack) uintptr
			func gcallers(gp *g, skip int, pcbuf []uintptr) int
			func gcAssistAlloc(gp *g)
			func gcAssistAlloc1(gp *g, scanWork int64)
			func gdestroy(gp *g)
			func gfput(pp *p, gp *g)
			func globrunqput(gp *g)
			func globrunqputhead(gp *g)
			func goexit0(gp *g)
			func gopreempt_m(gp *g)
			func goready(gp *g, traceskip int)
			func goroutineheader(gp *g)
			func gosched_m(gp *g)
			func goschedguarded_m(gp *g)
			func goschedImpl(gp *g, preempted bool)
			func goyield_m(gp *g)
			func isAsyncSafePoint(gp *g, pc, sp, lr uintptr) (bool, uintptr)
			func isShrinkStackSafe(gp *g) bool
			func isSystemGoroutine(gp *g, fixed bool) bool
			func netpollblockcommit(gp *g, gpp unsafe.Pointer) bool
			func netpollgoready(gp *g, traceskip int)
			func newproc1(fn *funcval, callergp *g, callerpc uintptr, parked bool, waitreason waitReason) *g
			func park_m(gp *g)
			func parkunlock_c(gp *g, lock unsafe.Pointer) bool
			func popDefer(gp *g)
			func preemptPark(gp *g)
			func printcreatedby(gp *g)
			func raceacquireg(gp *g, addr unsafe.Pointer)
			func racereleaseacquireg(gp *g, addr unsafe.Pointer)
			func racereleaseg(gp *g, addr unsafe.Pointer)
			func racereleasemergeg(gp *g, addr unsafe.Pointer)
			func readgstatus(gp *g) uint32
			func ready(gp *g, traceskip int, next bool)
			func recovery(gp *g)
			func resetForSleep(gp *g, _ unsafe.Pointer) bool
			func runqput(pp *p, gp *g, next bool)
			func runqputslow(pp *p, gp *g, h, t uint32) bool
			func saveAncestors(callergp *g) *[]ancestorInfo
			func saveg(pc, sp uintptr, gp *g, r *profilerecord.StackRecord, pcbuf []uintptr)
			func scanstack(gp *g, gcw *gcWork) int64
			func schedEnabled(gp *g) bool
			func selparkcommit(gp *g, _ unsafe.Pointer) bool
			func setg(gg *g)
			func setGNoWB(gp **g, new *g)
			func shouldPushSigpanic(gp *g, pc, lr uintptr) bool
			func showframe(sf srcFunc, gp *g, firstFrame bool, calleeID abi.FuncID) bool
			func shrinkstack(gp *g)
			func sighandler(sig uint32, info *siginfo, ctxt unsafe.Pointer, gp *g)
			func sigprof(pc, sp, lr uintptr, gp *g, mp *m)
			func startlockedm(gp *g)
			func suspendG(gp *g) suspendGState
			func syncadjustsudogs(gp *g, used uintptr, adjinfo *adjustinfo) uintptr
			func synctestidle_c(gp *g, _ unsafe.Pointer) bool
			func synctestwait_c(gp *g, _ unsafe.Pointer) bool
			func traceback(pc, sp, lr uintptr, gp *g)
			func traceback1(pc, sp, lr uintptr, gp *g, flags unwindFlags)
			func tracebackothers(me *g)
			func tracebacktrap(pc, sp, lr uintptr, gp *g)
			func traceCPUSample(gp *g, mp *m, pp *p, stk []uintptr)
			func traceStack(skip int, gp *g, gen uintptr) uint64
			func tryRecordGoroutineProfile(gp1 *g, pcbuf []uintptr, yield func())
			func tryRecordGoroutineProfileWB(gp1 *g)
			func wantAsyncPreempt(gp *g) bool
		
			
			  var fing *g
			  var g0
			  var gcrash
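		
		The g structure is internal; user code observes goroutines only through
		public APIs such as runtime.NumGoroutine and runtime.Stack, whose textual
		dumps report the goroutine id, status, and wait reason fields described
		above.
		
			package main
			
			import (
				"fmt"
				"runtime"
			)
			
			func main() {
				done := make(chan struct{})
				go func() { <-done }() // usually shows up blocked in "chan receive"
			
				buf := make([]byte, 1<<16)
				n := runtime.Stack(buf, true) // true: include all goroutines
				fmt.Printf("goroutines: %d\n%s", runtime.NumGoroutine(), buf[:n])
				close(done)
			}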
	
		gcBgMarkWorkerNode is an entry in the gcBgMarkWorkerPool. It points to a single
		gcBgMarkWorker goroutine.
		
			
			
				The g of this worker.
			
				Release this m on park. This is used to communicate with the unlock
				function, which cannot access the G's stack. It is unused outside of
				gcBgMarkWorker().
			
				Unused workers are managed in a lock-free stack. This field must be first.
	
		gcBits is an alloc/mark bitmap. This is always used as gcBits.x.
		
			
			x uint8
		
			
			
				bitp returns a pointer to the byte containing bit n and a mask for
				selecting that bit from *bytep.
			
				bytep returns a pointer to the n'th byte of b.
		
			
			func newAllocBits(nelems uintptr) *gcBits
			func newMarkBits(nelems uintptr) *gcBits
	
		
			
			bits [65520]gcBits
			
				gcBitsHeader // side step recursive type bug (issue 14620) by including fields by hand.
				// free is the index into bits of the next free byte; read/write atomically
			next *gcBitsArena
		
			
			
				tryAlloc allocates from b or returns nil if b does not have enough room.
				This is safe to call concurrently.
		
			
			func newArenaMayUnlock() *gcBitsArena
	
		
			
			
				// free is the index into bits of the next free byte.
			
				// *gcBits triggers recursive type bug. (issue 14620)
	
		
			
			
				assistBytesPerWork is 1/assistWorkPerByte.
				
				Note that because this is read and written independently
				from assistWorkPerByte users may notice a skew between
				the two values, and such a state should be safe.
			
				assistTime is the nanoseconds spent in mutator assists
				during this cycle. This is updated atomically, and must also
				be updated atomically even during a STW, because it is read
				by sysmon. Updates occur in bounded batches, since it is both
				written and read throughout the cycle.
			
				assistWorkPerByte is the ratio of scan work to allocated
				bytes that should be performed by mutator assists. This is
				computed at the beginning of each cycle and updated every
				time heapScan is updated.
			
				bgScanCredit is the scan work credit accumulated by the concurrent
				background scan. This credit is accumulated by the background scan
				and stolen by mutator assists.  Updates occur in bounded batches,
				since it is both written and read throughout the cycle.
			
				consMark is the estimated per-CPU consMark ratio for the application.
				
				It represents the ratio between the application's allocation
				rate, as bytes allocated per CPU-time, and the GC's scan rate,
				as bytes scanned per CPU-time.
				The units of this ratio are (B / cpu-ns) / (B / cpu-ns).
				
				At a high level, this value is computed as the bytes of memory
				allocated (cons) per unit of scan work completed (mark) in a GC
				cycle, divided by the CPU time spent on each activity.
				
				Updated at the end of each GC cycle, in endCycle.
			
				dedicatedMarkTime is the nanoseconds spent in dedicated mark workers
				during this cycle. This is updated at the end of the concurrent mark
				phase.
			
				dedicatedMarkWorkersNeeded is the number of dedicated mark workers
				that need to be started. This is computed at the beginning of each
				cycle and decremented as dedicated mark workers get started.
			
				fractionalMarkTime is the nanoseconds spent in the fractional mark
				worker during this cycle. This is updated throughout the cycle and
				will be up-to-date if the fractional mark worker is not currently
				running.
			
				fractionalUtilizationGoal is the fraction of wall clock
				time that should be spent in the fractional mark worker on
				each P that isn't running a dedicated worker.
				
				For example, if the utilization goal is 25% and there are
				no dedicated workers, this will be 0.25. If the goal is
				25%, there is one dedicated worker, and GOMAXPROCS is 5,
				this will be 0.05 to make up the missing 5%.
				
				If this is zero, no fractional workers are needed.
			
				Initialized from GOGC. GOGC=off means no GC.
			
				gcPercentHeapGoal is the goal heapLive for when next GC ends derived
				from gcPercent.
				
				Set to ^uint64(0) if gcPercent is disabled.
			
				globalsScan is the total amount of global variable space
				that is scannable.
			globalsScanWork atomic.Int64
			
				// bytes not in any span, but not released to the OS
			
				These memory stats are effectively duplicates of fields from
				memstats.heapStats but are updated atomically or with the world
				stopped and don't provide the same consistency guarantees.
				
				Because the runtime is responsible for managing a memory limit, it's
				useful to couple these stats more tightly to the gcController, which
				is intimately connected to how that memory limit is maintained.
				// bytes in mSpanInUse spans
			
				heapLive is the number of bytes considered live by the GC.
				That is: retained by the most recent GC plus allocated
				since then. heapLive ≤ memstats.totalAlloc-memstats.totalFree, since
				heapAlloc includes unmarked objects that have not yet been swept (and
				hence goes up as we allocate and down as we sweep) while heapLive
				excludes these objects (and hence only goes up between GCs).
				
				To reduce contention, this is updated only when obtaining a span
				from an mcentral and at this point it counts all of the unallocated
				slots in that span (which will be allocated before that mcache
				obtains another span from that mcentral). Hence, it slightly
				overestimates the "true" live heap size. It's better to overestimate
				than to underestimate because 1) this triggers the GC earlier than
				necessary rather than potentially too late and 2) this leads to a
				conservative GC rate rather than a GC rate that is potentially too
				low.
				
				Whenever this is updated, call traceHeapAlloc() and
				this gcControllerState's revise() method.
			
				heapMarked is the number of bytes marked by the previous
				GC. After mark termination, heapLive == heapMarked, but
				unlike heapLive, heapMarked does not change until the
				next mark termination.
			
				heapMinimum is the minimum heap size at which to trigger GC.
				For small heaps, this overrides the usual GOGC*live set rule.
				
				When there is a very small live set but a lot of allocation, simply
				collecting when the heap reaches GOGC*live results in many GC
				cycles and high total per-GC overhead. This minimum amortizes this
				per-GC overhead while keeping the heap reasonably small.
				
				During initialization this is set to 4MB*GOGC/100. In the case of
				GOGC==0, this will set heapMinimum to 0, resulting in constant
				collection even when the heap size is small, which is useful for
				debugging.
			
				// bytes released to the OS
			
				heapScan is the number of bytes of "scannable" heap. This is the
				live heap (as counted by heapLive), but omitting no-scan objects and
				no-scan tails of objects.
				
				This value is fixed at the start of a GC cycle. It represents the
				maximum scannable heap.
			
				heapScanWork is the total heap scan work performed this cycle.
				stackScanWork is the total stack scan work performed this cycle.
				globalsScanWork is the total globals scan work performed this cycle.
				
				These are updated atomically during the cycle. Updates occur in
				bounded batches, since they are both written and read
				throughout the cycle. At the end of the cycle, heapScanWork is how
				much of the retained heap is scannable.
				
				Currently these are measured in bytes. For most uses, this is an
				opaque unit of work, but for estimation the definition is important.
				
				Note that stackScanWork includes only stack space scanned, not all
				of the allocated stack.
			
				idleMarkTime is the nanoseconds spent in idle marking during this
				cycle. This is updated throughout the cycle.
			
				idleMarkWorkers is two packed int32 values in a single uint64.
				These two values are always updated simultaneously.
				
				The bottom int32 is the current number of idle mark workers executing.
				
				The top int32 is the maximum number of idle mark workers allowed to
				execute concurrently. Normally, this number is just gomaxprocs. However,
				during periodic GC cycles it is set to 0 because the system is idle
				anyway; there's no need to go full blast on all of GOMAXPROCS.
				
				The maximum number of idle mark workers is used to prevent new workers
				from starting, but it is not a hard maximum. It is possible (but
				exceedingly rare) for the current number of idle mark workers to
				transiently exceed the maximum. This could happen if the maximum changes
				just after a GC ends, while an M with no P is still acting as an idle worker.
				
				Note that if we have no dedicated mark workers, we set this value to
				1, because in that case we only have fractional GC workers, which aren't scheduled
				strictly enough to ensure GC progress. As a result, idle-priority mark
				workers are vital to GC progress in these situations.
				
				For example, consider a situation in which goroutines block on the GC
				(such as via runtime.GOMAXPROCS) and only fractional mark workers are
				scheduled (e.g. GOMAXPROCS=1). Without idle-priority mark workers, the
				last running M might skip scheduling a fractional mark worker if its
				utilization goal is met, such that once it goes to sleep (because there's
				nothing to do), there will be nothing else to spin up a new M for the
				fractional worker in the future, stalling GC progress and causing a
				deadlock. However, idle-priority workers will *always* run when there is
				nothing left to do, ensuring the GC makes progress.
				
				See github.com/golang/go/issues/44163 for more details.
			
				lastConsMark is the computed cons/mark value for the previous 4 GC
				cycles. Note that this is *not* the last value of consMark, but the
				measured cons/mark value in endCycle.
			
				lastHeapGoal is the value of heapGoal at the moment the last GC
				ended. Note that this is distinct from the last value heapGoal had,
				because it could change if e.g. gcPercent changes.
				
				Read and written with the world stopped or with mheap_.lock held.
			
				lastHeapScan is the number of bytes of heap that were scanned
				last GC cycle. It is the same as heapMarked, but only
				includes the "scannable" parts of objects.
				
				Updated when the world is stopped.
			
				lastStackScan is the number of bytes of stack that were scanned
				last GC cycle.
			
				// total virtual memory in the Ready state (see mem.go).
			
				markStartTime is the absolute start time in nanoseconds
				that assists and background mark workers started.
			
				maxStackScan is the amount of allocated goroutine stack space in
				use by goroutines.
				
				This number tracks allocated goroutine stack space rather than used
				goroutine stack space (i.e. what is actually scanned) because used
				goroutine stack space is much harder to measure cheaply. By using
				allocated space, we make an overestimate; this is OK, it's better
				to conservatively overcount than undercount.
			
				memoryLimit is the soft memory limit in bytes.
				
				Initialized from GOMEMLIMIT. GOMEMLIMIT=off is equivalent to MaxInt64
				which means no soft memory limit in practice.
				
				This is an int64 instead of a uint64 to more easily maintain parity with
				the SetMemoryLimit API, which sets a maximum at MaxInt64. This value
				should never be negative.
			
				runway is the amount of runway in heap bytes allocated by the
				application that we want to give the GC once it starts.
				
				This is computed from consMark during mark termination.
			stackScanWork atomic.Int64
			
				sweepDistMinTrigger is the minimum trigger to ensure a minimum
				sweep distance.
				
				This bound is also special because it applies to both the trigger
				*and* the goal (all other trigger bounds must be based *on* the goal).
				
				It is computed ahead of time, at commit time. The theory is that,
				absent a sudden change to a parameter like gcPercent, the trigger
				will be chosen to always give the sweeper enough headroom. However,
				such a change might dramatically and suddenly move up the trigger,
				in which case we need to ensure the sweeper still has enough headroom.
			
				test indicates that this is a test-only copy of gcControllerState.
			
				// total bytes allocated
			
				// total bytes freed
			
				triggered is the point at which the current GC cycle actually triggered.
				Only valid during the mark phase of a GC cycle, otherwise set to ^uint64(0).
				
				Updated while the world is stopped.
		
			
			(*gcControllerState) addGlobals(amount int64)
			
				addIdleMarkWorker attempts to add a new idle mark worker.
				
				If this returns true, the caller must become an idle mark worker unless
				there are no background mark worker goroutines in the pool. This case is
				harmless because there are already background mark workers running.
				If this returns false, the caller must NOT become an idle mark worker.
				
				nosplit because it may be called without a P.
			(*gcControllerState) addScannableStack(pp *p, amount int64)
			
				commit recomputes all pacing parameters needed to derive the
				trigger and the heap goal. Namely, the gcPercent-based heap goal,
				and the amount of runway we want to give the GC this cycle.
				
				This can be called any time. If GC is in the middle of a
				concurrent phase, it will adjust the pacing of that phase.
				
				isSweepDone should be the result of calling isSweepDone(),
				unless we're testing or we know we're executing during a GC cycle.
				
				This depends on gcPercent, gcController.heapMarked, and
				gcController.heapLive. These must be up to date.
				
				Callers must call gcControllerState.revise after calling this
				function if the GC is enabled.
				
				mheap_.lock must be held or the world must be stopped.
			
				endCycle computes the consMark estimate for the next cycle.
				userForced indicates whether the current GC cycle was forced
				by the application.
			
				enlistWorker encourages another dedicated mark worker to start on
				another P if there are spare worker slots. It is used by putfull
				when more work is made available.
			
				findRunnableGCWorker returns a background mark worker for pp if it
				should be run. This must only be called when gcBlackenEnabled != 0.
			
				heapGoal returns the current heap goal.
			
				heapGoalInternal is the implementation of heapGoal which returns additional
				information that is necessary for computing the trigger.
				
				The returned minTrigger is always <= goal.
			(*gcControllerState) init(gcPercent int32, memoryLimit int64)
			
				markWorkerStop must be called whenever a mark worker stops executing.
				
				It updates mark work accounting in the controller by a duration of
				work in nanoseconds and other bookkeeping.
				
				Safe to execute at any time.
			
				memoryLimitHeapGoal returns a heap goal derived from memoryLimit.
			
				needIdleMarkWorker is a hint as to whether another idle mark worker is needed.
				
				The caller must still call addIdleMarkWorker to become one. This is mainly
				useful for a quick check before an expensive operation.
				
				nosplit because it may be called without a P.
			
				removeIdleMarkWorker must be called when an idle mark worker stops executing.
			
				resetLive sets up the controller state for the next mark phase after the end
				of the previous one. Must be called after endCycle and before commit, before
				the world is started.
				
				The world must be stopped.
			
				revise updates the assist ratio during the GC cycle to account for
				improved estimates. This should be called whenever gcController.heapScan or
				gcController.heapLive is updated, or whenever any input to gcController.heapGoal
				changes. It is safe to call concurrently, but it may race with other
				calls to revise.
				
				The result of this race is that the two assist ratio values may not line
				up or may be stale. In practice this is OK because the assist ratio
				moves slowly throughout a GC cycle, and the assist ratio is a best-effort
				heuristic anyway. Furthermore, no part of the heuristic depends on
				the two assist ratio values being exact reciprocals of one another, since
				the two values are used to convert values from different sources.
				
				The worst case result of this raciness is that we may miss a larger shift
				in the ratio (say, if we decide to pace more aggressively against the
				hard heap goal) but even this "hard goal" is best-effort (see #40460).
				The dedicated GC should ensure we don't exceed the hard goal by too much
				in the rare case we do exceed it.
				
				It should only be called when gcBlackenEnabled != 0 (because this
				is when assists are enabled and the necessary statistics are
				available).
			
				setGCPercent updates gcPercent. commit must be called after.
				Returns the old value of gcPercent.
				
				The world must be stopped, or mheap_.lock must be held.
			
				setMaxIdleMarkWorkers sets the maximum number of idle mark workers allowed.
				
				This method is optimistic in that it does not wait for the number of
				idle mark workers to reduce to max before returning; it assumes the workers
				will deschedule themselves.
			
				setMemoryLimit updates memoryLimit. commit must be called after.
				Returns the old value of memoryLimit.
				
				The world must be stopped, or mheap_.lock must be held.
			
				startCycle resets the GC controller's state and computes estimates
				for a new GC cycle. The caller must hold worldsema and the world
				must be stopped.
			
				trigger returns the current point at which a GC should trigger along with
				the heap goal.
				
				The returned value may be compared against heapLive to determine whether
				the GC should trigger. Thus, the GC trigger condition should be (but may
				not be, in the case of small movements for efficiency) checked whenever
				the heap goal may change.
			(*gcControllerState) update(dHeapLive, dHeapScan int64)
		
			
			  var gcController
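		
		gcPercent and memoryLimit above are the internal state behind the public
		pacer knobs. Below is a minimal sketch of adjusting them at run time via
		runtime/debug; the values are arbitrary, and each setter returns the previous
		setting.
		
			package main
			
			import (
				"fmt"
				"runtime/debug"
			)
			
			func main() {
				oldPercent := debug.SetGCPercent(50)      // collect more aggressively
				oldLimit := debug.SetMemoryLimit(1 << 30) // 1 GiB soft limit
				fmt.Println("previous GC percent:", oldPercent)
				fmt.Println("previous memory limit:", oldLimit)
			
				// Restore the previous settings.
				debug.SetGCPercent(oldPercent)
				debug.SetMemoryLimit(oldLimit)
			}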
	
		
			
			
				assistTimePool is the accumulated assist time since the last update.
			bucket struct{fill, capacity uint64}
			enabled atomic.Bool
			
				gcEnabled is an internal copy of gcBlackenEnabled that determines
				whether the limiter tracks total assist time.
				
				gcBlackenEnabled isn't used directly so as to keep this structure
				unit-testable.
			
				idleMarkTimePool is the accumulated idle mark time since the last update.
			
				idleTimePool is the accumulated time Ps spent on the idle list since the last update.
			
				lastEnabledCycle is the GC cycle that last had the limiter enabled.
			
				lastUpdate is the nanotime timestamp of the last time update was called.
				
				Updated under lock, but may be read concurrently.
			lock atomic.Uint32
			
				nprocs is an internal copy of gomaxprocs, used to determine total available
				CPU time.
				
				gomaxprocs isn't used directly so as to keep this structure unit-testable.
			
				overflow is the cumulative amount of GC CPU time that we tried to fill the
				bucket with but exceeded its capacity.
			
				test indicates whether this instance of the struct was made for testing purposes.
			
				transitioning is true when the GC is in a STW and transitioning between
				the mark and sweep phases.
		
			
			
				accumulate adds time to the bucket and signals whether the limiter is enabled.
				
				This is an internal function that deals just with the bucket. Prefer update.
				l.lock must be held.
			
				addAssistTime notifies the limiter of additional assist time. It will be
				included in the next update.
			
				addIdleTime notifies the limiter of additional time a P spent on the idle list. It will be
				subtracted from the total CPU time in the next update.
			
				finishGCTransition notifies the limiter that the GC transition is complete
				and releases ownership of it. It also accumulates STW time in the bucket.
				now must be the timestamp from the end of the STW pause.
			
				limiting returns true if the CPU limiter is currently enabled, meaning the Go GC
				should take action to limit CPU utilization.
				
				It is safe to call concurrently with other operations.
			
				needUpdate returns true if the limiter's maximum update period has been
				exceeded, and so would benefit from an update.
			
				resetCapacity updates the capacity based on GOMAXPROCS. Must not be called
				while the GC is enabled.
				
				It is safe to call concurrently with other operations.
			
				startGCTransition notifies the limiter of a GC transition.
				
				This call takes ownership of the limiter and disables all other means of
				updating the limiter. Release ownership by calling finishGCTransition.
				
				It is safe to call concurrently with other operations.
			
				tryLock attempts to lock l. Returns true on success.
			
				unlock releases the lock on l. Must be called if tryLock returns true.
			
				update updates the bucket given runtime-specific information. now is the
				current monotonic time in nanoseconds.
				
				This is safe to call concurrently with other operations, except *GCTransition.
			
				updateLocked is the implementation of update. l.lock must be held.
		
			
			  var gcCPULimiter
	
		
			
			func gcDrain(gcw *gcWork, flags gcDrainFlags)
		
			
			const gcDrainFlushBgCredit
			const gcDrainFractional
			const gcDrainIdle
			const gcDrainUntilPreempt
	
		A gclink is a node in a linked list of blocks, like mlink,
		but it is opaque to the garbage collector.
		The GC does not trace the pointers during collection,
		and the compiler does not emit write barriers for assignments
		of gclinkptr values. Code should store references to gclinks
		as gclinkptr, not as *gclink.
		
			
			next gclinkptr
	
		A gclinkptr is a pointer to a gclink, but it is opaque
		to the garbage collector.
		
			
			
				ptr returns the *gclink form of p.
				The result should be used for accessing fields, not stored
				in other data structures.
		
			
			func nextFreeFast(s *mspan) gclinkptr
			func stackpoolalloc(order uint8) gclinkptr
		
			
			func stackpoolfree(x gclinkptr, order uint8)
	
		gcMarkWorkerMode represents the mode that a concurrent mark worker
		should operate in.
		
		Concurrent marking happens through four different mechanisms. One
		is mutator assists, which happen in response to allocations and are
		not scheduled. The other three are variations in the per-P mark
		workers and are distinguished by gcMarkWorkerMode.
		
			
			const gcMarkWorkerDedicatedMode
			const gcMarkWorkerFractionalMode
			const gcMarkWorkerIdleMode
			const gcMarkWorkerNotWorker
	
		gcMode indicates how concurrent a GC cycle should be.
		
			
			func gcSweep(mode gcMode) bool
		
			
			const gcBackgroundMode
			const gcForceBlockMode
			const gcForceMode
	
		gcStatsAggregate represents various GC stats obtained from the runtime
		acquired together to avoid skew and inconsistencies.
		
			
			globalsScan uint64
			heapScan uint64
			stackScan uint64
			totalScan uint64
		
			
			
				compute populates the gcStatsAggregate with values from the runtime.
	
		A gcTrigger is a predicate for starting a GC cycle. Specifically,
		it is an exit condition for the _GCoff phase.
		
			
			kind gcTriggerKind
			
				// gcTriggerCycle: cycle number to start
			
				// gcTriggerTime: current time
		
			
			
				test reports whether the trigger condition is satisfied, meaning
				that the exit condition for the _GCoff phase has been met. The exit
				condition should be tested when allocating.
		
			
			func gcStart(trigger gcTrigger)
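		
		A rough sketch of the predicate shape described above, with invented names
		and thresholds; the real test consults gcController state such as the heap
		trigger and the forced-GC period.
		
			package main
			
			import (
				"fmt"
				"time"
			)
			
			type triggerKind int
			
			const (
				triggerHeap triggerKind = iota
				triggerTime
				triggerCycle
			)
			
			type trigger struct {
				kind triggerKind
				now  int64  // triggerTime: current time
				n    uint32 // triggerCycle: cycle number to start
			}
			
			// test reports whether the exit condition for the "off" phase is met.
			func (t trigger) test(heapLive, heapTrigger uint64, lastGC int64, cyclesDone uint32) bool {
				switch t.kind {
				case triggerHeap:
					return heapLive >= heapTrigger
				case triggerTime:
					const forcePeriod = int64(2 * time.Minute) // assumed period
					return t.now-lastGC > forcePeriod
				case triggerCycle:
					return int32(t.n-cyclesDone) > 0 // handles wraparound
				}
				return false
			}
			
			func main() {
				fmt.Println(trigger{kind: triggerHeap}.test(90<<20, 64<<20, 0, 0)) // true
			}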
	
		
			
			const gcTriggerCycle
			const gcTriggerHeap
			const gcTriggerTime
	
		A gcWork provides the interface to produce and consume work for the
		garbage collector.
		
		A gcWork can be used on the stack as follows:
		
			(preemption must be disabled)
			gcw := &getg().m.p.ptr().gcw
			.. call gcw.put() to produce and gcw.tryGet() to consume ..
		
		It's important that any use of gcWork during the mark phase prevent
		the garbage collector from transitioning to mark termination since
		gcWork may locally hold GC work buffers. This can be done by
		disabling preemption (systemstack or acquirem).
		
			
			
				Bytes marked (blackened) on this gcWork. This is aggregated
				into work.bytesMarked by dispose.
			
				flushedWork indicates that a non-empty work buffer was
				flushed to the global work list since the last gcMarkDone
				termination check. Specifically, this indicates that this
				gcWork may have communicated work to another gcWork.
			
				Heap scan work performed on this gcWork. This is aggregated into
				gcController by dispose and may also be flushed by callers.
				Other types of scan work are flushed immediately.
			
				wbuf1 and wbuf2 are the primary and secondary work buffers.
				
				This can be thought of as a stack of both work buffers'
				pointers concatenated. When we pop the last pointer, we
				shift the stack up by one work buffer by bringing in a new
				full buffer and discarding an empty one. When we fill both
				buffers, we shift the stack down by one work buffer by
				bringing in a new empty buffer and discarding a full one.
				This way we have one buffer's worth of hysteresis, which
				amortizes the cost of getting or putting a work buffer over
				at least one buffer of work and reduces contention on the
				global work lists.
				
				wbuf1 is always the buffer we're currently pushing to and
				popping from and wbuf2 is the buffer that will be discarded
				next.
				
				Invariant: Both wbuf1 and wbuf2 are nil or neither are.
			
				wbuf1 and wbuf2 are the primary and secondary work buffers.
				
				This can be thought of as a stack of both work buffers'
				pointers concatenated. When we pop the last pointer, we
				shift the stack up by one work buffer by bringing in a new
				full buffer and discarding an empty one. When we fill both
				buffers, we shift the stack down by one work buffer by
				bringing in a new empty buffer and discarding a full one.
				This way we have one buffer's worth of hysteresis, which
				amortizes the cost of getting or putting a work buffer over
				at least one buffer of work and reduces contention on the
				global work lists.
				
				wbuf1 is always the buffer we're currently pushing to and
				popping from and wbuf2 is the buffer that will be discarded
				next.
				
				Invariant: Both wbuf1 and wbuf2 are nil or neither are.
		
			
			
				balance moves some work that's cached in this gcWork back on the
				global queue.
			
				dispose returns any cached pointers to the global queue.
				The buffers are being put on the full queue so that the
				write barriers will not simply reacquire them before the
				GC can inspect them. This helps reduce the mutator's
				ability to hide pointers during the concurrent mark phase.
			
				empty reports whether w has no mark work available.
			(*gcWork) init()
			
				put enqueues a pointer for the garbage collector to trace.
				obj must point to the beginning of a heap object or an oblet.
			
				putBatch performs a put on every pointer in obj. See put for
				constraints on these pointers.
			
				putFast does a put and reports whether it can be done quickly
				otherwise it returns false and the caller needs to call put.
			
				tryGet dequeues a pointer for the garbage collector to trace.
				
				If there are no pointers remaining in this gcWork or in the global
				queue, tryGet returns 0.  Note that there may still be pointers in
				other gcWork instances or other caches.
			
				tryGetFast dequeues a pointer for the garbage collector to trace
				if one is readily available. Otherwise it returns 0 and
				the caller is expected to call tryGet().
		
			
			func gcDrain(gcw *gcWork, flags gcDrainFlags)
			func gcDrainMarkWorkerDedicated(gcw *gcWork, untilPreempt bool)
			func gcDrainMarkWorkerFractional(gcw *gcWork)
			func gcDrainMarkWorkerIdle(gcw *gcWork)
			func gcDrainN(gcw *gcWork, scanWork int64) int64
			func greyobject(obj, base, off uintptr, span *mspan, gcw *gcWork, objIndex uintptr)
			func markroot(gcw *gcWork, i uint32, flushBgCredit bool) int64
			func markrootBlock(b0, n0 uintptr, ptrmask0 *uint8, gcw *gcWork, shard int) int64
			func markrootSpans(gcw *gcWork, shard int)
			func scanblock(b0, n0 uintptr, ptrmask *uint8, gcw *gcWork, stk *stackScanState)
			func scanConservative(b, n uintptr, ptrmask *uint8, gcw *gcWork, state *stackScanState)
			func scanframeworker(frame *stkframe, state *stackScanState, gcw *gcWork)
			func scanobject(b uintptr, gcw *gcWork)
			func scanstack(gp *g, gcw *gcWork) int64
	
		A gList is a list of Gs linked through g.schedlink. A G can only be
		on one gQueue or gList at a time.
		
			
			head guintptr
		
			
			
				empty reports whether l is empty.
			
				pop removes and returns the head of l. If l is empty, it returns nil.
			
				push adds gp to the head of l.
			
				pushAll prepends all Gs in q to l.
		
			
			func netpoll(delay int64) (gList, int32)
		
			
			func injectglist(glist *gList)
			func netpollready(toRun *gList, pd *pollDesc, mode int32) int32
	
		
			
			
				// for framepointer-enabled architectures
			ctxt unsafe.Pointer
			g guintptr
			lr uintptr
			pc uintptr
			ret uintptr
			
				The offsets of sp, pc, and g are known to (hard-coded in) libmach.
				
				ctxt is unusual with respect to GC: it may be a
				heap-allocated funcval, so GC needs to track it, but it
				needs to be set and cleared from assembly, where it's
				difficult to have write barriers. However, ctxt is really a
				saved, live register, and we only ever exchange it between
				the real register and the gobuf. Hence, we treat it as a
				root during stack scanning, which means assembly that saves
				and restores it doesn't need write barriers. It's still
				typed as a pointer so that any other writes from Go get
				write barriers.
		
			
			func gogo(buf *gobuf)
			func gostartcall(buf *gobuf, fn, ctxt unsafe.Pointer)
			func gostartcallfn(gobuf *gobuf, fv *funcval)
	
		A godebugInc provides access to internal/godebug's IncNonDefault function
		for a given GODEBUG setting.
		Calls before internal/godebug registers itself are dropped on the floor.
		
			
			inc atomic.Pointer[func()]
			name string
		
			(*godebugInc) IncNonDefault()
		
			
			  var panicnil *godebugInc
	
		goroutineProfileState indicates the status of a goroutine's stack for the
		current in-progress goroutine profile. Goroutines' stacks are initially
		"Absent" from the profile, and end up "Satisfied" by the time the profile is
		complete. While a goroutine's stack is being captured, its
		goroutineProfileState will be "InProgress" and it will not be able to run
		until the capture completes and the state moves to "Satisfied".
		
		Some goroutines (the finalizer goroutine, which at various times can be
		either a "system" or a "user" goroutine; the goroutine that is
		coordinating the profile; and any goroutines created during the profile)
		move directly to the "Satisfied" state.
		
			
			const goroutineProfileAbsent
			const goroutineProfileInProgress
			const goroutineProfileSatisfied
	
		
			
			noCopy atomic.noCopy
			value uint32
		
			(*goroutineProfileStateHolder) CompareAndSwap(old, new goroutineProfileState) bool
			(*goroutineProfileStateHolder) Load() goroutineProfileState
			(*goroutineProfileStateHolder) Store(value goroutineProfileState)
	
		A gQueue is a dequeue of Gs linked through g.schedlink. A G can only
		be on one gQueue or gList at a time.
		
			
			head guintptr
			tail guintptr
		
			
			
				empty reports whether q is empty.
			
				pop removes and returns the head of queue q. It returns nil if
				q is empty.
			
				popList takes all Gs in q and returns them as a gList.
			
				push adds gp to the head of q.
			
				pushBack adds gp to the tail of q.
			
				pushBackAll adds all Gs in q2 to the tail of q. After this q2 must
				not be used.
		
			
			func runqdrain(pp *p) (drainQ gQueue, n uint32)
		
			
			func globrunqputbatch(batch *gQueue, n int32)
			func runqputbatch(pp *p, q *gQueue, qsize int)
	
		gsignalStack saves the fields of the gsignal stack changed by
		setGsignalStack.
		
			
			stack stack
			stackguard0 uintptr
			stackguard1 uintptr
			stktopsp uintptr
		
			
			func adjustSignalStack(sig uint32, mp *m, gsigStack *gsignalStack) bool
			func restoreGsignalStack(st *gsignalStack)
			func setGsignalStack(st *stackt, old *gsignalStack)
	
		gTraceState is per-G state for the tracer.
		
			
			traceSchedResourceState traceSchedResourceState
			
				seq is the sequence counter for this scheduling resource's events.
				The purpose of the sequence counter is to establish a partial order between
				events that don't obviously happen serially (same M) in the stream of events.
				
				There are two of these so that we can reset the counter on each generation.
				This saves space in the resulting trace by keeping the counter small and allows
				GoStatus and GoCreate events to omit a sequence number (implicitly 0).
			
				statusTraced indicates whether a status event was traced for this resource
				in a particular generation.
				
				There are 3 of these because when transitioning across generations, traceAdvance
				needs to be able to reliably observe whether a status was traced for the previous
				generation, while we need to clear the value for the next generation.
		
			
			
				acquireStatus acquires the right to emit a Status event for the scheduling resource.
				
				nosplit because it's part of writing an event for an M, which must not
				have any stack growth.
			
				nextSeq returns the next sequence number for the resource.
			
				readyNextGen readies r for the generation following gen.
			
				reset resets the gTraceState for a new goroutine.
			
				setStatusTraced indicates that the resource's status was already traced, for example
				when a goroutine is created.
			
				statusWasTraced returns true if the sched resource's status was already acquired for tracing.
	
		A guintptr holds a goroutine pointer, but typed as a uintptr
		to bypass write barriers. It is used in the Gobuf goroutine state
		and in scheduling lists that are manipulated without a P.
		
		The Gobuf.g goroutine pointer is almost always updated by assembly code.
		In one of the few places it is updated by Go code - func save - it must be
		treated as a uintptr to avoid a write barrier being emitted at a bad time.
		Instead of figuring out how to emit the write barriers missing in the
		assembly manipulation, we change the type of the field to uintptr,
		so that it does not require write barriers at all.
		
		Goroutine structs are published in the allg list and never freed.
		That will keep the goroutine structs from being collected.
		There is never a time that Gobuf.g's contain the only references
		to a goroutine: the publishing of the goroutine in allg comes first.
		Goroutine pointers are also kept in non-GC-visible places like TLS,
		so I can't see them ever moving. If we did want to start moving data
		in the GC, we'd need to allocate the goroutine structs from an
		alternate arena. Using guintptr doesn't make that problem any worse.
		Note that pollDesc.rg, pollDesc.wg also store g in uintptr form,
		so they would need to be updated too if g's start moving.
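		
		A minimal sketch of the intended usage, with gp a hypothetical *g: the
		long-lived reference is stored as a guintptr (no write barrier) and
		converted back only transiently via ptr().
		
			var target guintptr
			target.set(gp) // stores gp without a write barrier
			if g := target.ptr(); g != nil {
				// use *g briefly; keep the stored reference as a guintptr
			}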
		
			
			(*guintptr) cas(old, new guintptr) bool
			( guintptr) ptr() *g
			(*guintptr) set(g *g)
		
			
			func runqgrab(pp *p, batch *[256]guintptr, batchHead uint32, stealRunNextG bool) uint32
	
		
			
			
				// points to an array of dataqsiz elements
			closed uint32
			
				// size of the circular queue
			elemsize uint16
			
				// element type
			
				lock protects all fields in hchan, as well as several
				fields in sudogs blocked on this channel.
				
				Do not change another G's status while holding this lock
				(in particular, do not ready a G), as this can deadlock
				with stack shrinking.
			
				// total data in the queue
			
				// list of recv waiters
			
				// receive index
			
				// list of send waiters
			
				// send index
			
				// true if created in a synctest bubble
			
				// timer feeding this chan
		
			
			(*hchan) raceaddr() unsafe.Pointer
			(*hchan) sortkey() uintptr
		
			
			func makechan(t *chantype, size int) *hchan
			func makechan64(t *chantype, size int64) *hchan
			func reflect_makechan(t *chantype, size int) *hchan
		
			
			func blockTimerChan(c *hchan)
			func chanbuf(c *hchan, i uint) unsafe.Pointer
			func chancap(c *hchan) int
			func chanlen(c *hchan) int
			func chanrecv(c *hchan, ep unsafe.Pointer, block bool) (selected, received bool)
			func chanrecv1(c *hchan, elem unsafe.Pointer)
			func chanrecv2(c *hchan, elem unsafe.Pointer) (received bool)
			func chansend(c *hchan, ep unsafe.Pointer, block bool, callerpc uintptr) bool
			func chansend1(c *hchan, elem unsafe.Pointer)
			func closechan(c *hchan)
			func empty(c *hchan) bool
			func full(c *hchan) bool
			func newTimer(when, period int64, f func(arg any, seq uintptr, delay int64), arg any, c *hchan) *timeTimer
			func racenotify(c *hchan, idx uint, sg *sudog)
			func racesync(c *hchan, sg *sudog)
			func recv(c *hchan, sg *sudog, ep unsafe.Pointer, unlockf func(), skip int)
			func reflect_chancap(c *hchan) int
			func reflect_chanclose(c *hchan)
			func reflect_chanlen(c *hchan) int
			func reflect_chanrecv(c *hchan, nb bool, elem unsafe.Pointer) (selected bool, received bool)
			func reflect_chansend(c *hchan, elem unsafe.Pointer, nb bool) (selected bool)
			func reflectlite_chanlen(c *hchan) int
			func selectnbrecv(elem unsafe.Pointer, c *hchan) (selected, received bool)
			func selectnbsend(c *hchan, elem unsafe.Pointer) (selected bool)
			func send(c *hchan, sg *sudog, ep unsafe.Pointer, unlockf func(), skip int)
			func timerchandrain(c *hchan) bool
			func unblockTimerChan(c *hchan)
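		
		For orientation, this is roughly how the compiler lowers channel syntax
		onto the functions above; the exact call chosen varies with the form of
		the statement and with optimizations, so treat it as a sketch rather
		than a specification.
		
			ch := make(chan int, 4) // makechan(t, 4)
			ch <- 1                 // chansend1(c, &v)
			x := <-ch               // chanrecv1(c, &x)
			x, ok := <-ch           // chanrecv2(c, &x) also reports whether a value was received
			close(ch)               // closechan(c)
			// A two-case select with a default clause lowers to
			// selectnbsend or selectnbrecv.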
	
		headTailIndex represents a combined 32-bit head and 32-bit tail
		of a queue into a single 64-bit value.
		
			
			
				head returns the head of a headTailIndex value.
			
				split splits the headTailIndex value into its parts.
			
				tail returns the tail of a headTailIndex value.
		
			
			func makeHeadTailIndex(head, tail uint32) headTailIndex
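		
		A plausible encoding consistent with the description and accessors
		above (head in the high 32 bits, tail in the low 32 bits); the actual
		layout is an implementation detail.
		
			func makeHeadTailIndexSketch(head, tail uint32) uint64 {
				return uint64(head)<<32 | uint64(tail)
			}
			
			func splitSketch(v uint64) (head, tail uint32) {
				return uint32(v >> 32), uint32(v)
			}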
	
		A heapArena stores metadata for a heap arena. heapArenas are stored
		outside of the Go heap and accessed via the mheap_.arenas index.
		
			
			
				checkmarks stores the debug.gccheckmark state. It is only
				used if debug.gccheckmark > 0.
			
				pageInUse is a bitmap that indicates which spans are in
				state mSpanInUse. This bitmap is indexed by page number,
				but only the bit corresponding to the first page in each
				span is used.
				
				Reads and writes are atomic.
			
				pageMarks is a bitmap that indicates which spans have any
				marked objects on them. Like pageInUse, only the bit
				corresponding to the first page in each span is used.
				
				Writes are done atomically during marking. Reads are
				non-atomic and lock-free since they only occur during
				sweeping (and hence never race with writes).
				
				This is used to quickly find whole spans that can be freed.
				
				TODO(austin): It would be nice if this was uint64 for
				faster scanning, but we don't have 64-bit atomic bit
				operations.
			
				pageSpecials is a bitmap that indicates which spans have
				specials (finalizers or other). Like pageInUse, only the bit
				corresponding to the first page in each span is used.
				
				Writes are done atomically whenever a special is added to
				a span and whenever the last special is removed from a span.
				Reads are done atomically to find spans containing specials
				during marking.
			
				spans maps from virtual address page ID within this arena to *mspan.
				For allocated spans, their pages map to the span itself.
				For free spans, only the lowest and highest pages map to the span itself.
				Internal pages map to an arbitrary span.
				For pages that have never been allocated, spans entries are nil.
				
				Modifications are protected by mheap.lock. Reads can be
				performed without locking, but ONLY from indexes that are
				known to contain in-use or stack spans. This means there
				must not be a safe-point between establishing that an
				address is live and looking it up in the spans array.
			
				zeroedBase marks the first byte of the first page in this
				arena which hasn't been used yet and is therefore already
				zero. zeroedBase is relative to the arena base.
				Increases monotonically until it hits heapArenaBytes.
				
				This field is sufficient to determine if an allocation
				needs to be zeroed because the page allocator follows an
				address-ordered first-fit policy.
				
				Read atomically and written with an atomic CAS.
		
			
			func pageIndexOf(p uintptr) (arena *heapArena, pageIdx uintptr, pageMask uint8)
	
		heapStatsAggregate represents memory stats obtained from the
		runtime. This set of stats is grouped together because they
		depend on each other in some way to make sense of the runtime's
		current heap memory use. They're also sharded across Ps, so it
		makes sense to grab them all at once.
		
			
			heapStatsDelta heapStatsDelta
			
				Memory stats.
				// byte delta of memory committed
			
				// byte delta of memory placed in the heap
			
				// byte delta of memory reserved for unrolled GC prog bits
			
				// byte delta of memory reserved for stacks
			
				// byte delta of memory reserved for work bufs
			
				// bytes allocated for large objects
			
				// number of large object allocations
			
				// bytes freed for large objects (>maxSmallSize)
			
				// number of frees for large objects (>maxSmallSize)
			
				// byte delta of released memory generated
			
				// number of allocs for small objects
			
				// number of frees for small objects (<=maxSmallSize)
			
				Allocator stats.
				
				These are all uint64 because they're cumulative, and could quickly wrap
				around otherwise.
				// number of tiny allocations
			
				inObjects is the number of bytes of memory occupied by objects.
			
				numObjects is the number of live objects in the heap.
			
				totalAllocated is the total bytes of heap objects allocated
				over the lifetime of the program.
			
				totalAllocs is the number of heap objects allocated over
				the lifetime of the program.
			
				totalFreed is the total bytes of heap objects freed
				over the lifetime of the program.
			
				totalFrees is the number of heap objects freed over
				the lifetime of the program.
		
			
			
				compute populates the heapStatsAggregate with values from the runtime.
			
				merge adds in the deltas from b into a.
	
		heapStatsDelta contains deltas of various runtime memory statistics
		that need to be updated together in order for them to be kept
		consistent with one another.
		
			
			
				Memory stats.
				// byte delta of memory committed
			
				// byte delta of memory placed in the heap
			
				// byte delta of memory reserved for unrolled GC prog bits
			
				// byte delta of memory reserved for stacks
			
				// byte delta of memory reserved for work bufs
			
				// bytes allocated for large objects
			
				// number of large object allocations
			
				// bytes freed for large objects (>maxSmallSize)
			
				// number of frees for large objects (>maxSmallSize)
			
				// byte delta of released memory generated
			
				// number of allocs for small objects
			
				// number of frees for small objects (<=maxSmallSize)
			
				Allocator stats.
				
				These are all uint64 because they're cumulative, and could quickly wrap
				around otherwise.
				// number of tiny allocations
		
			
			
				merge adds in the deltas from b into a.
	
		The compiler knows that a print of a value of this type
		should use printhex instead of printuint (decimal).
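		
		Illustrative use inside the runtime (hex is unexported, so this only
		applies to runtime code), with addr a hypothetical uintptr:
		
			print("addr=", hex(addr), "\n") // printed in hexadecimal rather than decimal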
	
		
			
			data unsafe.Pointer
			tab *itab
		
			
			func printiface(i iface)
			func reflect_ifaceE2I(inter *interfacetype, e eface, dst *iface)
			func reflectlite_ifaceE2I(inter *interfacetype, e eface, dst *iface)
	
		An initTask represents the set of initializations that need to be done for a package.
		Keep in sync with ../../test/noinit.go:initTask
		
			
			nfns uint32
			
				// 0 = uninitialized, 1 = in progress, 2 = done
		
			
			func plugin_lastmoduleinit() (path string, syms map[string]any, initTasks []*initTask, errstr string)
		
			
			func doInit(ts []*initTask)
			func doInit1(t *initTask)
	
		inlinedCall is the encoding of entries in the FUNCDATA_InlTree table.
		
			
			
				// type of the called function
			
				// offset into pclntab for name of called function
			
				// position of an instruction whose source position is the call site (offset from entry)
			
				// line number of start of function (func keyword/TEXT directive)
	
		An inlineFrame is a position in an inlineUnwinder.
		
			
			
				index is the index of the current record in inlTree, or -1 if we are in
				the outermost function.
			
				pc is the PC giving the file/line metadata of the current frame. This is
				always a "call PC" (not a "return PC"). This is 0 when the iterator is
				exhausted.
		
			
			( inlineFrame) valid() bool
		
			
			func newInlineUnwinder(f funcInfo, pc uintptr) (inlineUnwinder, inlineFrame)
		
			
			func badSrcFunc(*inlineUnwinder, inlineFrame) srcFunc
	
		An inlineUnwinder iterates over the stack of inlined calls at a PC by
		decoding the inline table. The last step of iteration is always the frame of
		the physical function, so there's always at least one frame.
		
		This is typically used as:
		
			for u, uf := newInlineUnwinder(...); uf.valid(); uf = u.next(uf) { ... }
		
		Implementation note: This is used in contexts that disallow write barriers.
		Hence, the constructor returns this by value and pointer receiver methods
		must not mutate pointer fields. Also, we keep the mutable state in a separate
		struct mostly to keep both structs SSA-able, which generates much better
		code.
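		
		A slightly expanded sketch of the loop above, using only the methods
		documented below and assuming a valid funcInfo f and pc:
		
			u, uf := newInlineUnwinder(f, pc)
			for ; uf.valid(); uf = u.next(uf) {
				file, line := u.fileLine(uf)
				print(file, ":", line, " inlined=", u.isInlined(uf), "\n")
			}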
		
			
			f funcInfo
			inlTree *[1048576]inlinedCall
		
			
			
				fileLine returns the file name and line number of the call within the given
				frame. As a convenience, for the innermost frame, it returns the file and
				line of the PC this unwinder was started at (often this is a call to another
				physical function).
				
				It returns "?", 0 if something goes wrong.
			
				isInlined returns whether uf is an inlined frame.
			
				next returns the frame representing uf's logical caller.
			(*inlineUnwinder) resolveInternal(pc uintptr) inlineFrame
			
				srcFunc returns the srcFunc representing the given frame.
				
				srcFunc should be an internal detail,
				but widely used packages access it using linkname.
				Notable members of the hall of shame include:
				  - github.com/phuslu/log
				
				Do not remove or change the type signature.
				See go.dev/issue/67401.
				
				The go:linkname is below.
		
			
			func newInlineUnwinder(f funcInfo, pc uintptr) (inlineUnwinder, inlineFrame)
		
			
			func badSrcFunc(*inlineUnwinder, inlineFrame) srcFunc
	 type interfacetype = abi.InterfaceType (struct)	
		Note: change the formula in the mallocgc call in itabAdd if you change these fields.
		
			
			
				// current number of filled entries.
			
				// really [size] large
			
				// length of entries array. Always a power of 2.
		
			
			
				add adds the given itab to itab table t.
				itabLock must be held.
			
				find finds the given interface/type pair in t.
				Returns nil if the given interface/type pair isn't present.
		
			
			  var itabTable *itabTableType
			  var itabTableInit
	
		
			
			it_interval timespec
			it_value timespec
		
			
			func timer_settime(timerid int32, flags int32, new, old *itimerspec) int32
	
		Lock-free stack node.
		Also known to export_test.go.
		
			
			next uint64
			pushcnt uintptr
		
			
			func lfstackUnpack(val uint64) *lfnode
		
			
			func lfnodeValidate(node *lfnode)
			func lfstackPack(node *lfnode, cnt uintptr) uint64
	
		lfstack is the head of a lock-free stack.
		
		The zero value of lfstack is an empty list.
		
		This stack is intrusive. Nodes must embed lfnode as the first field.
		
		The stack does not keep GC-visible pointers to nodes, so the caller
		must ensure the nodes are allocated outside the Go heap.
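		
		A minimal sketch of an intrusive node, per the constraints above. The
		node type is hypothetical, and the nodes must come from non-GC'd memory
		(for example persistentalloc'd memory), which is assumed rather than
		shown.
		
			type myNode struct {
				node lfnode // must be the first field
				data uintptr
			}
			
			func lfstackSketch(head *lfstack, n *myNode) *myNode {
				head.push(&n.node)
				return (*myNode)(head.pop()) // valid because node is the first field
			}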
		
			
			(*lfstack) empty() bool
			(*lfstack) pop() unsafe.Pointer
			(*lfstack) push(node *lfnode)
		
			
			  var gcBgMarkWorkerPool
	
		limiterEvent represents tracking state for an event tracked by the GC CPU limiter.
		
			
			
				// Stores a limiterEventStamp.
		
			
			
				consume acquires the partial event CPU time from any in-flight event.
				It achieves this by storing the current time as the new event time.
				
				Returns the type of the in-flight event, as well as how long it's currently been
				executing for. Returns limiterEventNone if no event is active.
			
				start begins tracking a new limiter event of the current type. If an event
				is already in flight, then a new event cannot begin because the current time is
				already being attributed to that event. In this case, this function returns false.
				Otherwise, it returns true.
				
				The caller must be non-preemptible until at least stop is called or this function
				returns false. Because this is trying to measure "on-CPU" time of some event, getting
				scheduled away during it can mean that whatever we're measuring isn't a reflection
				of "on-CPU" time. The OS could deschedule us at any time, but we want to maintain as
				close of an approximation as we can.
			
				stop stops the active limiter event. Throws if the event being stopped
				does not match the event currently in flight.
				
				The caller must be non-preemptible across the event. See start as to why.
	
		limiterEventStamp is a nanotime timestamp packed with a limiterEventType.
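		
		One plausible encoding, consistent with the duration note below about a
		2^(64-limiterEventBits) boundary: the event type occupies the top
		limiterEventBits bits and the truncated nanotime value the rest. The
		constant width used here is an assumption, not the runtime's actual
		value.
		
			const eventBits = 3 // assumed width of the type field
			
			func makeStampSketch(typ uint64, now int64) uint64 {
				mask := uint64(1)<<(64-eventBits) - 1
				return typ<<(64-eventBits) | uint64(now)&mask
			}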
		
			
			
				duration computes the difference between now and the start time stored in the stamp.
				
				Returns 0 if the difference is negative, which may happen if now is stale or if the
				before and after timestamps cross a 2^(64-limiterEventBits) boundary.
			
				type extracts the event type from the stamp.
		
			
			func makeLimiterEventStamp(typ limiterEventType, now int64) limiterEventStamp
		
			
			const limiterEventStampNone
	
		limiterEventType indicates the type of an event occurring on some P.
		
		These events represent the full set of events that the GC CPU limiter tracks
		to execute its function.
		
		This type may use no more than limiterEventBits bits of information.
		
			
			func makeLimiterEventStamp(typ limiterEventType, now int64) limiterEventStamp
		
			
			const limiterEventIdle
			const limiterEventIdleMarkWork
			const limiterEventMarkAssist
			const limiterEventNone
			const limiterEventScavengeAssist
	
		linearAlloc is a simple linear allocator that pre-reserves a region
		of memory and then optionally maps that region into the Ready state
		as needed.
		
		The caller is responsible for locking.
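		
		The core bump-allocation step, sketched without the Reserved-to-Ready
		mapping and sysStat accounting that the real alloc performs; next and
		end correspond to the fields below, and align is assumed to be a power
		of two.
		
			func bumpAlloc(next, end *uintptr, size, align uintptr) unsafe.Pointer {
				p := (*next + align - 1) &^ (align - 1) // round up to alignment
				if p+size > *end {
					return nil // reserved region exhausted
				}
				*next = p + size
				return unsafe.Pointer(p)
			}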
		
			
			
				// end of reserved space
			
				// transition memory from Reserved to Ready if true
			
				// one byte past end of mapped space
			
				// next free byte
		
			
			(*linearAlloc) alloc(size, align uintptr, sysStat *sysMemStat) unsafe.Pointer
			(*linearAlloc) init(base, size uintptr, mapMemory bool)
	
		linknameIter is the it argument to mapiterinit and mapiternext.
		
		Callers of mapiterinit allocate their own iter structure, which has the
		layout of the pre-Go 1.24 hiter structure, shown here for posterity:
		
			type hiter struct {
				key         unsafe.Pointer
				elem        unsafe.Pointer
				t           *maptype
				h           *hmap
				buckets     unsafe.Pointer
				bptr        *bmap
				overflow    *[]*bmap
				oldoverflow *[]*bmap
				startBucket uintptr
				offset      uint8
				wrapped     bool
				B           uint8
				i           uint8
				bucket      uintptr
				checkBucket uintptr
			}
		
		Our structure must maintain compatibility with the old structure. This
		means:
		
		  - Our structure must be the same size or smaller than hiter. Otherwise we
		    may write outside the caller's hiter allocation.
		  - Our structure must have the same pointer layout as hiter, so that the GC
		    tracks pointers properly.
		
		Based on analysis of the "hall of shame" users of these linknames:
		
		  - The key and elem fields must be kept up to date with the current key/elem.
		    Some users directly access the key and elem fields rather than calling
		    reflect.mapiterkey/reflect.mapiterelem.
		  - The t field must be non-nil after mapiterinit. gonum.org/v1/gonum uses
		    this to verify the iterator is initialized.
		  - github.com/segmentio/encoding and github.com/RomiChan/protobuf check if h
		    is non-nil, but the code has no effect. Thus the value of h does not
		    matter. See internal/runtime_reflect/map.go.
		
			
			elem unsafe.Pointer
			
				The real iterator.
			
				Fields from hiter.
			typ *abi.SwissMapType
		
			
			func mapiterinit(t *abi.SwissMapType, m *maps.Map, it *linknameIter)
			func mapiternext(it *linknameIter)
			func reflect_mapiterelem(it *linknameIter) unsafe.Pointer
			func reflect_mapiterinit(t *abi.SwissMapType, m *maps.Map, it *linknameIter)
			func reflect_mapiterkey(it *linknameIter) unsafe.Pointer
			func reflect_mapiternext(it *linknameIter)
	
		
			
			
				// Must represent a user arena chunk.
			
				allocBits and gcmarkBits hold pointers to a span's mark and
				allocation bits. The pointers are 8 byte aligned.
				There are three arenas where this data is held.
				free: Dirty arenas that are no longer accessed
				      and can be reused.
				next: Holds information to be used in the next GC cycle.
				current: Information being used during this GC cycle.
				previous: Information being used during the last GC cycle.
				A new GC cycle starts with the call to finishsweep_m.
				finishsweep_m moves the previous arena to the free arena,
				the current arena to the previous arena, and
				the next arena to the current arena.
				The next arena is populated as the spans request
				memory to hold gcmarkBits for the next GC cycle as well
				as allocBits for newly allocated spans.
				
				The pointer arithmetic is done "by hand" instead of using
				arrays to avoid bounds checks along critical performance
				paths.
				The sweep will free the old allocBits and set allocBits to the
				gcmarkBits. The gcmarkBits are replaced with a fresh zeroed
				out memory.
			
				Cache of the allocBits at freeindex. allocCache is shifted
				such that the lowest bit corresponds to the bit freeindex.
				allocCache holds the complement of allocBits, thus allowing
				ctz (count trailing zero) to use it directly.
				allocCache may contain bits beyond s.nelems; the caller must ignore
				these.
			
				// number of allocated objects
			
				// a copy of allocCount that is stored just before this span is cached
			
				// for divide by elemsize
			
				// computed from sizeclass or from npages
			
				freeIndexForScan is like freeindex, except that freeindex is
				used by the allocator whereas freeIndexForScan is used by the
				GC scanner. They are two fields so that the GC sees the object
				is allocated only when the object and the heap bits are
				initialized (see also the assignment of freeIndexForScan in
				mallocgc, and issue 54596).
			
				freeindex is the slot index between 0 and nelems at which to begin scanning
				for the next free object in this span.
				Each allocation scans allocBits starting at freeindex until it encounters a 0
				indicating a free object. freeindex is then adjusted so that subsequent scans begin
				just past the newly discovered free object.
				
				If freeindex == nelem, this span has no free objects.
				
				allocBits is a bitmap of objects in this span.
				If n >= freeindex and allocBits[n/8] & (1<<(n%8)) is 0
				then object n is free;
				otherwise, object n is allocated. Bits starting at nelem are
				undefined and should never be referenced.
				
				Object n starts at address n*elemsize + (start << pageShift).
			mspan.gcmarkBits *gcBits
			
				// whether or not this span represents a user arena
			
				// malloc header for large objects.
			
				// end of data in span
			
				// For debugging.
			
				// list of free objects in mSpanManual spans
			
				// needs to be zeroed before allocation
			
				TODO: Look up nelems from sizeclass and remove this field if it
				helps performance.
				// number of objects in the span.
			
				// next span in list, or nil if none
			
				// number of pages in span
			
				// bitmap for pinned objects; accessed atomically
			
				// previous span in list, or nil if none
			
				// size class and noscan (uint8)
			
				// guards specials list and changes to pinnerBits
			
				// linked list of special records sorted by offset.
			
				// address of first byte of span aka s.base()
			
				// mSpanInUse etc; accessed atomically (get/set methods)
			mspan.sweepgen uint32
			
				// interval for managing chunk allocation
			
				Reference to mspan.base() to keep the chunk alive.
		
			
			( liveUserArenaChunk) allocBitsForIndex(allocBitIndex uintptr) markBits
			( liveUserArenaChunk) base() uintptr
			
				countAlloc returns the number of objects allocated in span s by
				scanning the mark bitmap.
			
				decPinCounter decreases the counter. If the counter reaches 0, the counter
				special is deleted and false is returned. Otherwise true is returned.
			
				divideByElemSize returns n/s.elemsize.
				n must be within [0, s.npages*_PageSize),
				or may be exactly s.npages*_PageSize
				if s.elemsize is from sizeclasses.go.
				
				nosplit, because it is called by objIndex, which is nosplit
			
				Returns only when span s has been swept.
			
				nosplit, because it's called by isPinned, which is nosplit
			
				heapBits returns the heap ptr/scalar bits stored at the end of the span for
				small object spans and heap arena spans.
				
				Note that the uintptr of each element means something different for small object
				spans and for heap arena spans. Small object spans are easy: they're never interpreted
				as anything but uintptr, so they're immune to differences in endianness. However, the
				heapBits for user arena spans is exposed through a dummy type descriptor, so the byte
				ordering needs to match the same byte ordering the compiler would emit. The compiler always
				emits the bitmap data in little endian byte ordering, so on big endian platforms these
				uintptrs will have their byte orders swapped from what they normally would be.
				
				heapBitsInSpan(span.elemsize) or span.isUserArenaChunk must be true.
			
				heapBitsSmallForAddr loads the heap bits for the object stored at addr from span.heapBits.
				
				addr must be the base pointer of an object in the span. heapBitsInSpan(span.elemsize)
				must be true.
			( liveUserArenaChunk) inList() bool
			
				incPinCounter is only called for multiple pins of the same object and records
				the _additional_ pins.
			
				Initialize a new span with the given start and npages.
			
				initHeapBits initializes the heap bitmap for a span.
			
				isFree reports whether the index'th object in s is unallocated.
				
				The caller must ensure s.state is mSpanInUse, and there must have
				been no preemption points since ensuring this (which could allow a
				GC transition, which would allow the state to change).
			
				isUnusedUserArenaChunk indicates that the arena chunk has been set to fault
				and doesn't contain any scannable memory anymore. However, it might still be
				mSpanInUse as it sits on the quarantine list, since it needs to be swept.
				
				This is not safe to execute unless the caller has ownership of the mspan or
				the world is stopped (preemption is prevented while the relevant state changes).
				
				This is really only meant to be used by accounting tests in the runtime to
				distinguish when a span shouldn't be counted (since mSpanInUse might not be
				enough).
			( liveUserArenaChunk) layout() (size, n, total uintptr)
			( liveUserArenaChunk) markBitsForBase() markBits
			( liveUserArenaChunk) markBitsForIndex(objIndex uintptr) markBits
			
				newPinnerBits returns a pointer to 8 byte aligned bytes to be used for this
				span's pinner bits. newPinnerBits is used to mark objects that are pinned.
				They are copied when the span is swept.
			
				nextFreeIndex returns the index of the next free object in s at
				or after s.freeindex.
				There are hardware instructions that can be used to make this
				faster if profiling warrants it.
			
				objBase returns the base pointer for the object containing addr in span.
				
				Assumes that addr points into a valid part of span (span.base() <= addr < span.limit).
			
				nosplit, because it is called by other nosplit code like findObject
			( liveUserArenaChunk) pinnerBitSize() uintptr
			
				refillAllocCache takes 8 bytes s.allocBits starting at whichByte
				and negates them so that ctz (count trailing zeros) instructions
				can be used. It then places these 8 bytes into the cached 64 bit
				s.allocCache.
			
				refreshPinnerBits replaces pinnerBits with a fresh copy in the arenas for the
				next GC cycle. If it does not contain any pinned objects, pinnerBits of the
				span is set to nil.
			
				reportZombies reports any marked but free objects in s and throws.
				
				This generally means one of the following:
				
				1. User code converted a pointer to a uintptr and then back
				unsafely, and a GC ran while the uintptr was the only reference to
				an object.
				
				2. User code (or a compiler bug) constructed a bad pointer that
				points to a free slot, often a past-the-end pointer.
				
				3. The GC two cycles ago missed a pointer and freed a live object,
				but it was still live in the last cycle, so this GC cycle found a
				pointer to that object and marked it.
			( liveUserArenaChunk) setPinnerBits(p *pinnerBits)
			
				setUserArenaChunkToFault sets the address space for the user arena chunk to fault
				and releases any underlying memory resources.
				
				Must be in a non-preemptible state to ensure the consistency of statistics
				exported to MemStats.
			
				Find a splice point in the sorted list and check for an already existing
				record. Returns a pointer to the next-reference in the list predecessor.
				Returns true, if the referenced item is an exact match.
			
				typePointersOf returns an iterator over all heap pointers in the range [addr, addr+size).
				
				addr and addr+size must be in the range [span.base(), span.limit).
				
				Note: addr+size must be passed as the limit argument to the iterator's next method on
				each iteration. This slightly awkward API is to allow typePointers to be destructured
				by the compiler.
				
				nosplit because it is used during write barriers and must not be preempted.
			
				typePointersOfType is like typePointersOf, but assumes addr points to one or more
				contiguous instances of the provided type. The provided type must not be nil.
				
				It returns an iterator that tiles typ's gcmask starting from addr. It's the caller's
				responsibility to limit iteration.
				
				nosplit because its callers are nosplit and require all their callees to be nosplit.
			
				typePointersOfUnchecked is like typePointersOf, but assumes addr is the base
				of an allocation slot in a span (the start of the object if no header, the
				header otherwise). It returns an iterator that generates all pointers
				in the range [addr, addr+span.elemsize).
				
				nosplit because it is used during write barriers and must not be preempted.
			
				userArenaNextFree reserves space in the user arena for an item of the specified
				type. If cap is not -1, this is for an array of cap elements of type t.
			
				writeHeapBitsSmall writes the heap bits for small objects whose ptr/scalar data is
				stored as a bitmap at the end of the span.
				
				Assumes dataSize is <= ptrBits*goarch.PtrSize. x must be a pointer into the span.
				heapBitsInSpan(dataSize) must be true. dataSize must be >= typ.Size_.
			( liveUserArenaChunk) writeUserArenaHeapBits(addr uintptr) (h writeUserArenaHeapBits)
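		
		The freeindex and allocCache fields described above drive the span's
		free-slot scan: allocCache holds the complement of allocBits, shifted so
		that bit 0 corresponds to freeindex, so counting trailing zeros gives
		the distance to the next free object. A simplified sketch (the real
		code also refills the cache and advances freeindex):
		
			func nextFreeSketch(freeindex uintptr, allocCache uint64, nelems uintptr) uintptr {
				bit := uintptr(sys.TrailingZeros64(allocCache)) // math/bits.TrailingZeros64 outside the runtime
				if idx := freeindex + bit; idx < nelems {
					return idx
				}
				return nelems // span has no free objects
			}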
	
		
			( lockRank) String() string
		
			 lockRank : expvar.Var
			 lockRank : fmt.Stringer
			
			 lockRank : stringer
			 lockRank : context.stringer
		
			
			func getLockRank(l *mutex) lockRank
		
			
			func acquireLockRankAndM(rank lockRank)
			func assertRankHeld(r lockRank)
			func lockInit(l *mutex, rank lockRank)
			func lockWithRank(l *mutex, rank lockRank)
			func lockWithRankMayAcquire(l *mutex, rank lockRank)
			func releaseLockRankAndM(rank lockRank)
		
			
			const lockRankAllg
			const lockRankAllocmR
			const lockRankAllocmRInternal
			const lockRankAllocmW
			const lockRankAllp
			const lockRankAssistQueue
			const lockRankCpuprof
			const lockRankDeadlock
			const lockRankDefer
			const lockRankExecR
			const lockRankExecRInternal
			const lockRankExecW
			const lockRankFin
			const lockRankForcegc
			const lockRankGcBitsArenas
			const lockRankGlobalAlloc
			const lockRankGscan
			const lockRankHchan
			const lockRankHchanLeaf
			const lockRankItab
			const lockRankLeafRank
			const lockRankMheap
			const lockRankMheapSpecial
			const lockRankMspanSpecial
			const lockRankNetpollInit
			const lockRankNotifyList
			const lockRankPanic
			const lockRankPollCache
			const lockRankPollDesc
			const lockRankProfBlock
			const lockRankProfInsert
			const lockRankProfMemActive
			const lockRankProfMemFuture
			const lockRankRaceFini
			const lockRankReflectOffs
			const lockRankRoot
			const lockRankScavenge
			const lockRankSched
			const lockRankSpanSetSpine
			const lockRankStackLarge
			const lockRankStackpool
			const lockRankStrongFromWeakQueue
			const lockRankSudog
			const lockRankSweep
			const lockRankSweepWaiters
			const lockRankSynctest
			const lockRankSysmon
			const lockRankTestR
			const lockRankTestRInternal
			const lockRankTestW
			const lockRankTimer
			const lockRankTimers
			const lockRankTimerSend
			const lockRankTrace
			const lockRankTraceBuf
			const lockRankTraceStackTab
			const lockRankTraceStrings
			const lockRankTraceTypeTab
			const lockRankUnknown
			const lockRankUserArenaState
			const lockRankWakeableSleep
			const lockRankWbufSpans
	
		lockRankStruct is embedded in mutex, but is empty when staticlockranking is
		disabled (the default).
	
		lockTimer assists with profiling contention on runtime-internal locks.
		
		There are several steps between the time that an M experiences contention and
		when that contention may be added to the profile. This comes from our
		constraints: We need to keep the critical section of each lock small,
		especially when those locks are contended. The reporting code cannot acquire
		new locks until the M has released all other locks, which means no memory
		allocations and encourages use of (temporary) M-local storage.
		
		The M will have space for storing one call stack that caused contention, and
		for the magnitude of that contention. It will also have space to store the
		magnitude of additional contention the M caused, since it only has space to
		remember one call stack and might encounter several contention events before
		it releases all of its locks and is thus able to transfer the local buffer
		into the profile.
		
		The M will collect the call stack when it unlocks the contended lock. That
		minimizes the impact on the critical section of the contended lock, and
		matches the mutex profile's behavior for contention in sync.Mutex: measured
		at the Unlock method.
		
		The profile for contention on sync.Mutex blames the caller of Unlock for the
		amount of contention experienced by the callers of Lock which had to wait.
		When there are several critical sections, this allows identifying which of
		them is responsible.
		
		Matching that behavior for runtime-internal locks will require identifying
		which Ms are blocked on the mutex. The semaphore-based implementation is
		ready to allow that, but the futex-based implementation will require a bit
		more work. Until then, we report contention on runtime-internal locks with a
		call stack taken from the unlock call (like the rest of the user-space
		"mutex" profile), but assign it a duration value based on how long the
		previous lock call took (like the user-space "block" profile).
		
		Thus, reporting the call stacks of runtime-internal lock contention is
		guarded by GODEBUG for now. Set GODEBUG=runtimecontentionstacks=1 to enable.
		
		TODO(rhysh): plumb through the delay duration, remove GODEBUG, update comment
		
		The M will track this by storing a pointer to the lock; lock/unlock pairs for
		runtime-internal locks are always on the same M.
		
		Together, that demands several steps for recording contention. First, when
		finally acquiring a contended lock, the M decides whether it should plan to
		profile that event by storing a pointer to the lock in its "to be profiled
		upon unlock" field. If that field is already set, it uses the relative
		magnitudes to weight a random choice between itself and the other lock, with
		the loser's time being added to the "additional contention" field. Otherwise
		if the M's call stack buffer is occupied, it does the comparison against that
		sample's magnitude.
		
		Second, having unlocked a mutex the M checks to see if it should capture the
		call stack into its local buffer. Finally, when the M unlocks its last mutex,
		it transfers the local buffer into the profile. As part of that step, it also
		transfers any "additional contention" time to the profile. Any lock
		contention that it experiences while adding samples to the profile will be
		recorded later as "additional contention" and not include a call stack, to
		avoid an echo.
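		
		From user code, the resulting contention data is surfaced through the
		ordinary mutex profile; a minimal way to look at it with public APIs
		(whether runtime-internal lock stacks appear additionally depends on the
		GODEBUG setting mentioned above):
		
			package main
			
			import (
				"os"
				"runtime"
				"runtime/pprof"
			)
			
			func main() {
				runtime.SetMutexProfileFraction(1) // record every contention event
				// ... run contended code ...
				pprof.Lookup("mutex").WriteTo(os.Stdout, 1)
			}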
		
			
			lock *mutex
			tickStart int64
			timeRate int64
			timeStart int64
		
			
			(*lockTimer) begin()
			(*lockTimer) end()
	
		
			
			
				// on allm
			
				// m is blocked on a note
			
				// goroutine running during fatal signal
			
				// cgo traceback if crashing in cgo call
			
				// if non-zero, cgoCallers in use temporarily
			chacha8 chacha8rand.State
			cheaprand uint64
			
				// stack that created this thread, it's used for StackRecord.Stack0, so it must align with it.
			
				// current running goroutine
			
				// div/mod denominator for arm - known to liblink
			dlogPerM dlogPerM
			dying int32
			
				// Whether it is safe to free g0 and delete m (one of freeMRef, freeMStack, freeMWait)
			
				// on sched.freem
			
				// goroutine with scheduling stack
			
				// whether the g0 stack has accurate bounds
			
				// Go-allocated signal handling stack
			
				// signal-handling g
			id int64
			
				// m is executing a cgo call
			
				// m is an extra m that does not have any Go frames
			
				// m is an extra m in a signal handler
			
				// m is an extra m
			
				these are here because they are too large to be on the stack
				of low-level NOSPLIT functions.
			libcallg guintptr
			
				// for cpu profiler
			libcallsp uintptr
			
				// tracking for external LockOSThread
			
				// tracking for internal lockOSThread
			lockedg guintptr
			locks int32
			locksHeld [10]heldLockInfo
			
				Up to 10 locks held by this m, maintained by the lock ranking code.
			
				// fields relating to runtime.lock contention
			mOS mOS
			mallocing int32
			
				// gobuf arg to morestack
			
				needPerThreadSyscall indicates that a per-thread syscall is required
				for doAllThreadsSyscall.
			
				profileTimer holds the ID of the POSIX interval timer for profiling CPU
				usage on this thread.
				
				It is valid when the profileTimerValid field is true. A thread
				creates and manages its own timer, and these fields are read and written
				only by this thread. But because some of the reads on profileTimerValid
				are in signal handling code, this field uses an atomic type.
			mOS.profileTimerValid atomic.Bool
			
				This is a pointer to a chunk of memory allocated with a special
				mmap invocation in vgetrandomGetState().
			
				// semaphore for parking on locks
			
				// list of runtime lock waiters
			mstartfn func()
			
				// number of cgo calls currently in progress
			
				// number of cgo calls in total
			needextram bool
			
				// minit on C thread called sigaltstack
			nextp puintptr
			
				// the p that was attached before executing a syscall
			
				// attached p for executing go code (nil if not executing go code)
			park note
			
				pcvalue lookup cache
			
				preemptGen counts the number of completed preemption
				signals. This is used to detect when a preemption is
				requested, but fails.
			
				// if != "", keep curg running on this m
			printlock int8
			
				Fields not known to debuggers.
				// for debuggers, but offset not hard-coded
			
				// used for memory/block/mutex stack traces
			profilehz int32
			schedlink muintptr
			
				// storage for saved signal mask
			
				Whether this is a pending preemption signal on this M.
			
				// m is out of work and is actively looking for work
			syscalltick uint32
			throwing throwType
			
				// thread-local storage (for x86 extern register)
			trace mTraceState
			traceback uint8
			
				// PC for traceback while in VDSO call
			
				// SP for traceback while in VDSO call (0 if not in call)
			waitTraceBlockReason traceBlockReason
			waitTraceSkip int
			waitlock unsafe.Pointer
			
				wait* are used to carry arguments from gopark into park_m, because
				there's no stack to put them on. That is their sole purpose.
			
				// stores syscall parameters on windows
		
			
			(*m) becomeSpinning()
			(*m) hasCgoOnStack() bool
		
			
			func acquirem() *m
			func allocm(pp *p, fn func(), id int64) *m
			func gcParkStrongFromWeak() *m
			func getExtraM() (mp *m, last bool)
			func lockextra(nilokay bool) *m
			func mget() *m
		
			
			func addExtraM(mp *m)
			func adjustSignalStack(sig uint32, mp *m, gsigStack *gsignalStack) bool
			func adjustSignalStack2(sig uint32, sp uintptr, mp *m, ssDisable bool)
			func callbackUpdateSystemStack(mp *m, sp uintptr, signal bool)
			func canPreemptM(mp *m) bool
			func fatalsignal(sig uint32, c *sigctxt, gp *g, mp *m) *g
			func getMCache(mp *m) *mcache
			func mcommoninit(mp *m, id int64)
			func mdestroy(mp *m)
			func mpreinit(mp *m)
			func mProf_Malloc(mp *m, p unsafe.Pointer, size uintptr)
			func mProfStackInit(mp *m)
			func mput(mp *m)
			func mrandinit(mp *m)
			func newm1(mp *m)
			func newosproc(mp *m)
			func osPreemptExtEnter(mp *m)
			func osPreemptExtExit(mp *m)
			func osSetupTLS(mp *m)
			func preemptM(mp *m)
			func profilealloc(mp *m, x unsafe.Pointer, size uintptr)
			func putExtraM(mp *m)
			func releasem(mp *m)
			func semacreate(mp *m)
			func semawakeup(mp *m)
			func setMNoWB(mp **m, new *m)
			func signalM(mp *m, sig int)
			func sigNotOnStack(sig uint32, sp uintptr, mp *m)
			func sigprof(pc, sp, lr uintptr, gp *g, mp *m)
			func traceCPUSample(gp *g, mp *m, pp *p, stk []uintptr)
			func traceThreadDestroy(mp *m)
			func unlockextra(mp *m, delta int32)
			func validSIGPROF(mp *m, c *sigctxt) bool
			func vgetrandomDestroy(mp *m)
		
			
			  var allm *m
			  var m0
	 type maptype = abi.SwissMapType (struct)	
		markBits provides access to the mark bit for an object in the heap.
		bytep points to the byte holding the mark bit.
		mask is a byte with a single bit set that can be &ed with *bytep
		to see if the bit has been set.
		*m.bytep&m.mask != 0 indicates the mark bit is set.
		index can be used along with span information to generate
		the address of the object in the heap.
		We maintain one set of mark bits for allocation and one for
		marking purposes.
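		
		How the fields combine, sketched to match the formula above; the real
		methods are listed below, and the atomic variant mirrors what concurrent
		marking requires.
		
			func isMarkedSketch(m markBits) bool {
				return *m.bytep&m.mask != 0
			}
			
			func setMarkedSketch(m markBits) {
				atomic.Or8(m.bytep, m.mask) // atomic: the GC marks concurrently
			}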
		
			
			bytep *uint8
			index uintptr
			mask uint8
		
			
			
				advance advances the markBits to the next object in the span.
			
				clearMarked clears the marked bit in the markbits, atomically.
			
				isMarked reports whether mark bit m is set.
			
				setMarked sets the marked bit in the markbits, atomically.
			
				setMarkedNonAtomic sets the marked bit in the markbits, non-atomically.
		
			
			func markBitsForAddr(p uintptr) markBits
			func markBitsForSpan(base uintptr) (mbits markBits)
		
			
			func setCheckmark(obj, base, off uintptr, mbits markBits) bool
	
		Per-thread (in Go, per-P) cache for small objects.
		This includes a small object cache and local allocation stats.
		No locking needed because it is per-thread (per-P).
		
		mcaches are allocated from non-GC'd memory, so any heap pointers
		must be specially handled.
		
			
			
				// spans to allocate from, indexed by spanClass
			
				flushGen indicates the sweepgen during which this mcache
				was last flushed. If flushGen != mheap_.sweepgen, the spans
				in this mcache are stale and need to be flushed so they
				can be swept. This is done in acquirep.
			
				// cached mem profile rate, used to detect changes
			
				The following members are accessed on every malloc,
				so they are grouped here for better caching.
				// trigger heap sample after allocating this many bytes
			
				// bytes of scannable heap allocated
			stackcache [4]stackfreelist
			
				tiny points to the beginning of the current tiny block, or
				nil if there is no current tiny block.
				
				tiny is a heap pointer. Since mcache is in non-GC'd memory,
				we handle it by clearing it in releaseAll during mark
				termination.
				
				tinyAllocs is the number of tiny allocations performed
				by the P that owns this mcache.
			tinyAllocs uintptr
			tinyoffset uintptr
		
			
			
				allocLarge allocates a span for a large object.
			
				nextFree returns the next free object from the cached span if one is available.
				Otherwise it refills the cache with a span with an available object and
				returns that object along with a flag indicating that this was a heavy
				weight allocation. If it is a heavy weight allocation the caller must
				determine whether a new GC cycle needs to be started or if the GC is active
				whether this goroutine needs to assist the GC.
				
				Must run in a non-preemptible context since otherwise the owner of
				c could change.
			
				prepareForSweep flushes c if the system has entered a new sweep phase
				since c was populated. This must happen between the sweep phase
				starting and the first allocation from c.
			
				refill acquires a new span of span class spc for c. This span will
				have at least one free object. The current span in c must be full.
				
				Must run in a non-preemptible context since otherwise the owner of
				c could change.
			(*mcache) releaseAll()
		
			
			func allocmcache() *mcache
			func getMCache(mp *m) *mcache
		
			
			func freemcache(c *mcache)
			func stackcache_clear(c *mcache)
			func stackcacherefill(c *mcache, order uint8)
			func stackcacherelease(c *mcache, order uint8)
		
			
			  var mcache0 *mcache
	
		Central list of free objects of a given size.
		
			
			
				// list of spans with no free objects
			
				partial and full contain two mspan sets: one of swept in-use
				spans, and one of unswept in-use spans. These two trade
				roles on each GC cycle. The unswept set is drained either by
				allocation or by the background sweeper in every GC cycle,
				so only two roles are necessary.
				
				sweepgen is increased by 2 on each GC cycle, so the swept
				spans are in partial[sweepgen/2%2] and the unswept spans are in
				partial[1-sweepgen/2%2]. Sweeping pops spans from the
				unswept set and pushes spans that are still in-use on the
				swept set. Likewise, allocating an in-use span pushes it
				on the swept set.
				
				Some parts of the sweeper can sweep arbitrary spans, and hence
				can't remove them from the unswept set, but will add the span
				to the appropriate swept list. As a result, the parts of the
				sweeper and mcentral that do consume from the unswept list may
				encounter swept spans, and these should be ignored.
				// list of spans with a free object
			spanclass spanClass
		
			
			
				Allocate a span to use in an mcache.
			
				fullSwept returns the spanSet which holds swept spans without any
				free slots for this sweepgen.
			
				fullUnswept returns the spanSet which holds unswept spans without any
				free slots for this sweepgen.
			
				grow allocates a new empty span from the heap and initializes it for c's size class.
			
				Initialize a single central free list.
			
				partialSwept returns the spanSet which holds partially-filled
				swept spans for this sweepgen.
			
				partialUnswept returns the spanSet which holds partially-filled
				unswept spans for this sweepgen.
			
				Return span from an mcache.
				
				s must have a span class corresponding to this
				mcentral and it must not be empty.
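		
		The sweepgen indexing rule quoted in the partial/full field comment,
		written out as a sketch (assuming partial and full are two-element
		spanSet arrays, as that comment describes):
		
			func partialSweptSketch(c *mcentral, sweepgen uint32) *spanSet {
				return &c.partial[sweepgen/2%2] // swept, partially-filled spans this cycle
			}
			
			func partialUnsweptSketch(c *mcentral, sweepgen uint32) *spanSet {
				return &c.partial[1-sweepgen/2%2] // unswept spans this cycle
			}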
	
		A memRecord is the bucket data for a bucket of type memProfile,
		part of the memory profile.
		
			
			
				active is the currently published profile. A profiling
				cycle can be accumulated into active once it's complete.
			
				future records the profile events we're counting for cycles
				that have not yet been published. This is ring buffer
				indexed by the global heap profile cycle C and stores
				cycles C, C+1, and C+2. Unlike active, these counts are
				only for a single cycle; they are not cumulative across
				cycles.
				
				We store cycle C here because there's a window between when
				C becomes the active cycle and when we've flushed it to
				active.
	
		memRecordCycle
		
			
			alloc_bytes uintptr
			allocs uintptr
			free_bytes uintptr
			frees uintptr
		
			
			
				add accumulates b into a. It does not zero b.
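		A minimal sketch of add, mirroring the four counters listed above (an
		illustration, not the runtime's source):

			package main

			import "fmt"

			type memRecordCycle struct {
				allocs, frees           uintptr
				alloc_bytes, free_bytes uintptr
			}

			// add accumulates b into a; it does not zero b.
			func (a *memRecordCycle) add(b *memRecordCycle) {
				a.allocs += b.allocs
				a.frees += b.frees
				a.alloc_bytes += b.alloc_bytes
				a.free_bytes += b.free_bytes
			}

			func main() {
				a := memRecordCycle{allocs: 2, alloc_bytes: 64}
				b := memRecordCycle{allocs: 1, frees: 1, alloc_bytes: 32, free_bytes: 32}
				a.add(&b)
				fmt.Printf("%+v\n", a)
			}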
	
		
			
			
				compute is a function that populates a metricValue
				given a populated statAggregate structure.
			
				deps is the set of runtime statistics that this metric
				depends on. Before compute is called, the statAggregate
				which will be passed must ensure() these dependencies.
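		A hypothetical illustration of the compute/deps contract described above.
		All names below are stand-ins rather than the runtime's actual metric
		table: a metric declares its dependencies, and compute reads only from an
		aggregate whose dependencies have already been ensure()d.

			package main

			import "fmt"

			type statAggregate struct{ heapAllocBytes uint64 }

			type metricValue struct{ scalar uint64 }

			type metricData struct {
				deps    []string // statistics that must be populated before compute runs
				compute func(in *statAggregate, out *metricValue)
			}

			func main() {
				m := metricData{
					deps: []string{"heapStats"},
					compute: func(in *statAggregate, out *metricValue) {
						out.scalar = in.heapAllocBytes
					},
				}
				agg := statAggregate{heapAllocBytes: 1 << 20} // pretend ensure() ran
				var out metricValue
				m.compute(&agg, &out)
				fmt.Println(m.deps, out.scalar)
			}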
	
		metricFloat64Histogram is a runtime copy of runtime/metrics.Float64Histogram
		and must be kept structurally identical to that type.
		
			
			buckets []float64
			counts []uint64
	
		metricKind is a runtime copy of runtime/metrics.ValueKind and
		must be kept structurally identical to that type.
		
			
			const metricKindBad
			const metricKindFloat64
			const metricKindFloat64Histogram
			const metricKindUint64
	
		
			
			( metricReader) compute(_ *statAggregate, out *metricValue)
	
		metricSample is a runtime copy of runtime/metrics.Sample and
		must be kept structurally identical to that type.
		
			
			name string
			value metricValue
	
		metricValue is a runtime copy of runtime/metrics.Value and
		must be kept structurally identical to that type.
		
			
			kind metricKind
			
				// contains non-scalar values.
			
				// contains scalar values for scalar Kinds.
		
			
			
				float64HistOrInit tries to pull out an existing float64Histogram
				from the value, but if none exists, then it allocates one with
				the given buckets.
		
			
			func compute0(_ *statAggregate, out *metricValue)
	
		Main malloc heap.
		The heap itself is the "free" and "scav" treaps,
		but all the other global data is here too.
		
		mheap must not be heap-allocated because it contains mSpanLists,
		which must not be heap-allocated.
		
			
			
				allArenas is the arenaIndex of every mapped arena. This can
				be used to iterate through the address space.
				
				Access is protected by mheap_.lock. However, since this is
				append-only and old backing arrays are never freed, it is
				safe to acquire mheap_.lock, copy the slice header, and
				then release mheap_.lock.
			
				allspans is a slice of all mspans ever created. Each mspan
				appears exactly once.
				
				The memory for allspans is manually managed and can be
				reallocated and moved as the heap grows.
				
				In general, allspans is protected by mheap_.lock, which
				prevents concurrent access as well as freeing the backing
				store. Accesses during STW might not hold the lock, but
				must ensure that allocation cannot happen around the
				access (since that may free the backing store).
				// all spans out there
			
				arena is a pre-reserved space for allocating heap arenas
				(the actual arenas). This is only used on 32-bit.
			
				// allocator for arenaHints
			
				arenaHints is a list of addresses at which to attempt to
				add more heap arenas. This is initially populated with a
				set of general hint addresses, and grown with the bounds of
				actual heap arena ranges.
			
				arenas is the heap arena map. It points to the metadata for
				the heap for every arena frame of the entire usable virtual
				address space.
				
				Use arenaIndex to compute indexes into this array.
				
				For regions of the address space that are not backed by the
				Go heap, the arena map contains nil.
				
				Modifications are protected by mheap_.lock. Reads can be
				performed without locking; however, a given entry can
				transition from nil to non-nil at any time when the lock
				isn't held. (Entries never transition back to nil.)
				
				In general, this is a two-level mapping consisting of an L1
				map and possibly many L2 maps. This saves space when there
				are a huge number of arena frames. However, on many
				platforms (even 64-bit), arenaL1Bits is 0, making this
				effectively a single-level map. In this case, arenas[0]
				will never be nil.
			
				arenasHugePages indicates whether arenas' L2 entries are eligible
				to be backed by huge pages.
			
				// allocator for mcache*
			
				central free lists for small size classes.
				the padding makes sure that the mcentrals are
				spaced CacheLinePadSize bytes apart, so that each mcentral.lock
				gets its own cache line.
				central is indexed by spanClass.
			
				cleanupID is a counter which is incremented each time a cleanup special is added
				to a span. It's used to create globally unique identifiers for individual cleanups.
				cleanupID is protected by mheap_.lock. It should only be incremented while holding
				the lock.
			
				curArena is the arena that the heap is currently growing
				into. This should always be physPageSize-aligned.
			
				heapArenaAlloc is pre-reserved space for allocating heapArena
				objects. This is only used on 32-bit, where we pre-reserve
				this space to avoid interleaving it with the heap itself.
			
				lock must only be acquired on the system stack, otherwise a g
				could self-deadlock if its stack grows with the lock held.
			
				markArenas is a snapshot of allArenas taken at the beginning
				of the mark cycle. Because allArenas is append-only, neither
				this slice nor its contents will change during the mark, so
				it can be read safely.
			
				// page allocation data structure
			
				Proportional sweep
				
				These parameters represent a linear function from gcController.heapLive
				to page sweep count. The proportional sweep system works to
				stay in the black by keeping the current page sweep count
				above this line at the current gcController.heapLive.
				
				The line has slope sweepPagesPerByte and passes through a
				basis point at (sweepHeapLiveBasis, pagesSweptBasis). At
				any given time, the system is at (gcController.heapLive,
				pagesSwept) in this space.
				
				It is important that the line pass through a point we
				control rather than simply starting at a 0,0 origin
				because that lets us adjust sweep pacing at any time while
				accounting for current progress. If we could only adjust
				the slope, it would create a discontinuity in debt if any
				progress has already been made.
				// pages of spans in stats mSpanInUse
			
				// pages swept this cycle
			
				// pagesSwept to use as the origin of the sweep ratio
			
				reclaimCredit is spare credit for extra pages swept. Since
				the page reclaimer works in large chunks, it may reclaim
				more than requested. Any spare pages released go to this
				credit pool.
			
				reclaimIndex is the page index in allArenas of next page to
				reclaim. Specifically, it refers to page (i %
				pagesPerArena) of arena allArenas[i / pagesPerArena].
				
				If this is >= 1<<63, the page reclaimer is done scanning
				the page marks.
			
				// allocator for span*
			
				// allocator for specialcleanup*
			
				// allocator for specialPinCounter
			
				// allocator for specialReachable
			
				// allocator for specialWeakHandle
			
				// allocator for specialfinalizer*
			
				// lock for special record allocators.
			
				// allocator for specialprofile*
			
				sweepArenas is a snapshot of allArenas taken at the
				beginning of the sweep cycle. This can be read safely by
				simply blocking GC (by disabling preemption).
			
				// value of gcController.heapLive to use as the origin of sweep ratio; written with lock, read without
			
				// proportional sweep ratio; written with lock, read without
			
				// sweep generation, see comment in mspan; written during STW
			
				// never set, just here to force the specialfinalizer type into DWARF
			
				User arena state.
				
				Protected by mheap_.lock.
		
			
			
				alloc allocates a new span of npage pages from the GC'd heap.
				
				spanclass indicates the span's size class and scannability.
				
				Returns a span that has been fully initialized. span.needzero indicates
				whether the span has been zeroed. Note that it may not be.
			
				allocMSpanLocked allocates an mspan object.
				
				h.lock must be held.
				
				allocMSpanLocked must be called on the system stack because
				its caller holds the heap lock. See mheap for details.
				Running on the system stack also ensures that we won't
				switch Ps during this function. See tryAllocMSpan for details.
			
				allocManual allocates a manually-managed span of npage pages.
				allocManual returns nil if allocation fails.
				
				allocManual adds the bytes used to *stat, which should be a
				memstats in-use field. Unlike allocations in the GC'd heap, the
				allocation does *not* count toward heapInUse.
				
				The memory backing the returned span may not be zeroed if
				span.needzero is set.
				
				allocManual must be called on the system stack because it may
				acquire the heap lock via allocSpan. See mheap for details.
				
				If new code is written to call allocManual, do NOT use an
				existing spanAllocType value and instead declare a new one.
			
				allocNeedsZero checks if the region of address space [base, base+npage*pageSize),
				assumed to be allocated, needs to be zeroed, updating heap arena metadata for
				future allocations.
				
				This must be called each time pages are allocated from the heap, even if the page
				allocator can otherwise prove the memory it's allocating is already zero because
				they're fresh from the operating system. It updates heapArena metadata that is
				critical for future page allocations.
				
				There are no locking constraints on this method.
			
				allocSpan allocates an mspan which owns npages worth of memory.
				
				If typ.manual() == false, allocSpan allocates a heap span of class spanclass
				and updates heap accounting. If manual == true, allocSpan allocates a
				manually-managed span (spanclass is ignored), and the caller is
				responsible for any accounting related to its use of the span. Either
				way, allocSpan will atomically add the bytes in the newly allocated
				span to *sysStat.
				
				The returned span is fully initialized.
				
				h.lock must not be held.
				
				allocSpan must be called on the system stack both because it acquires
				the heap lock and because it must block GC transitions.
			
				allocUserArenaChunk attempts to reuse a free user arena chunk represented
				as a span.
				
				Must be in a non-preemptible state to ensure the consistency of statistics
				exported to MemStats.
				
				Acquires the heap lock. Must run on the system stack for that reason.
			
				enableMetadataHugePages enables huge pages for various sources of heap metadata.
				
				A note on latency: for sufficiently small heaps (<10s of GiB) this function will take constant
				time, but may take time proportional to the size of the mapped heap beyond that.
				
				This function is idempotent.
				
				The heap lock must not be held over this operation, since it will briefly acquire
				the heap lock.
				
				Must be called on the system stack because it acquires the heap lock.
			
				freeMSpanLocked frees an mspan object.
				
				h.lock must be held.
				
				freeMSpanLocked must be called on the system stack because
				its caller holds the heap lock. See mheap for details.
				Running on the system stack also ensures that we won't
				switch Ps during this function. See tryAllocMSpan for details.
			
				freeManual frees a manually-managed span returned by allocManual.
				typ must be the same as the spanAllocType passed to the allocManual that
				allocated s.
				
				This must only be called when gcphase == _GCoff. See mSpanState for
				an explanation.
				
				freeManual must be called on the system stack because it acquires
				the heap lock. See mheap for details.
			
				Free the span back into the heap.
			(*mheap) freeSpanLocked(s *mspan, typ spanAllocType)
			
				Try to add at least npage pages of memory to the heap,
				returning how much the heap grew by and whether it worked.
				
				h.lock must be held.
			
				Initialize the heap.
			
				initSpan initializes a blank span s which will represent the range
				[base, base+npages*pageSize). typ is the type of span being allocated.
			
				nextSpanForSweep finds and pops the next span for sweeping from the
				central sweep buffers. It returns ownership of the span to the caller.
				Returns nil if no such span exists.
			
				reclaim sweeps and reclaims at least npage pages into the heap.
				It is called before allocating npage pages to keep growth in check.
				
				reclaim implements the page-reclaimer half of the sweeper.
				
				h.lock must NOT be held.
			
				reclaimChunk sweeps unmarked spans that start at page indexes [pageIdx, pageIdx+n).
				It returns the number of pages returned to the heap.
				
				h.lock must be held and the caller must be non-preemptible. Note: h.lock may be
				temporarily unlocked and re-locked in order to do sweeping or if tracing is
				enabled.
			
				scavengeAll acquires the heap lock (blocking any additional
				manipulation of the page allocator) and iterates over the whole
				heap, scavenging every free page available.
				
				Must run on the system stack because it acquires the heap lock.
			
				setSpans modifies the span map so [spanOf(base), spanOf(base+npage*pageSize))
				is s.
			
				sysAlloc allocates heap arena space for at least n bytes. The
				returned pointer is always heapArenaBytes-aligned and backed by
				h.arenas metadata. The returned size is always a multiple of
				heapArenaBytes. sysAlloc returns nil on failure.
				There is no corresponding free function.
				
				hintList is a list of hint addresses for where to allocate new
				heap arenas. It must be non-nil.
				
				register indicates whether the heap arena should be registered
				in allArenas.
				
				sysAlloc returns a memory region in the Reserved state. This region must
				be transitioned to Prepared and then Ready before use.
				
				h must be locked.
			
				tryAllocMSpan attempts to allocate an mspan object from
				the P-local cache, but may fail.
				
				h.lock need not be held.
				
				The caller must ensure that its P won't change underneath
				it during this function. Currently we ensure this by requiring
				that the function run on the system stack, because that's
				the only place it is used now. In the future, this requirement
				may be relaxed if its use is necessary elsewhere.
		
			
			  var mheap_
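		The proportional sweep parameters described above define a line through a
		movable basis point. A small sketch of the pacing arithmetic, using field
		names from the comment but otherwise illustrative (not the runtime's code):

			package main

			import "fmt"

			// sweepDebt returns how many more pages must be swept to stay on or
			// above the line pagesTarget(heapLive) = pagesSweptBasis +
			// sweepPagesPerByte*(heapLive - sweepHeapLiveBasis).
			func sweepDebt(heapLive, sweepHeapLiveBasis uint64, sweepPagesPerByte float64,
				pagesSwept, pagesSweptBasis uint64) int64 {
				target := float64(pagesSweptBasis) +
					sweepPagesPerByte*float64(heapLive-sweepHeapLiveBasis)
				return int64(target) - int64(pagesSwept)
			}

			func main() {
				// Since the basis point was last set, the heap grew by 4 MiB and
				// 100 pages were swept.
				debt := sweepDebt(68<<20, 64<<20, 0.0001, 1100, 1000)
				fmt.Println("pages still owed to the sweeper:", debt)
			}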
	
	A generic linked list of blocks.  (Typically the block is bigger than sizeof(MLink).)
	Since assignments to mlink.next will result in a write barrier being performed,
	this cannot be used by some of the internal GC structures. For example, when
	the sweeper is placing an unmarked object on the free list, it does not want the
	write barrier to be called, since that could result in the object being reachable.
		
			
			next *mlink
	
		
			
			
				// cycles attributable to "pending" (if set), otherwise to "stack"
			
				// contention for which we weren't able to record a call stack
			
				// attribute all time to "lost"
			
				// stack and cycles are to be added to the mutex profile
			
				// *mutex that experienced contention (to be traceback-ed)
			
				// stack that experienced contention in runtime.lockWithRank
			
				// total nanoseconds spent waiting in runtime.lockWithRank
		
			
			(*mLockProfile) captureStack()
			(*mLockProfile) recordLock(cycles int64, l *mutex)
			
				From unlock2, we might not be holding a p in this code.
			(*mLockProfile) store()
	
		moduledata records information about the layout of the executable
		image. It is written by the linker. Any changes here must be
		matched changes to the code in cmd/link/internal/ld/symtab.go:symtab.
		moduledata is stored in statically allocated non-pointer memory;
		none of the pointers here are visible to the garbage collector.
		
			
				// Only in static data
			
			
				// module failed to load and should be ignored
			bss uintptr
			covctrs uintptr
			cutab []uint32
			data uintptr
			ebss uintptr
			ecovctrs uintptr
			edata uintptr
			end uintptr
			enoptrbss uintptr
			enoptrdata uintptr
			etext uintptr
			etypes uintptr
			filetab []byte
			findfunctab uintptr
			ftab []functab
			funcnametab []byte
			gcbss uintptr
			gcbssmask bitvector
			gcdata uintptr
			gcdatamask bitvector
			
				// go.func.*
			
				// 1 if module contains the main function, 0 otherwise
			
				This slice records the initializing tasks that need to be
				done to start up the program. It is built by the linker.
			itablinks []*itab
			maxpc uintptr
			minpc uintptr
			modulehashes []modulehash
			modulename string
			next *moduledata
			noptrbss uintptr
			noptrdata uintptr
			pcHeader *pcHeader
			pclntable []byte
			pctab []byte
			pkghashes []modulehash
			pluginpath string
			ptab []ptabEntry
			rodata uintptr
			text uintptr
			textsectmap []textsect
			
				// offsets from types
			
				// offset to *_rtype in previous module
			types uintptr
		
			
			
				funcName returns the string at nameOff in the function name table.
			
				textAddr returns md.text + off, with special handling for multiple text sections.
				off is a (virtual) offset computed at internal linking time,
				before the external linker adjusts the sections' base addresses.
				
				The text, or instruction stream, is generated as one large buffer.
				The off (offset) for a function is its offset within this buffer.
				If the total text size gets too large, there can be issues on platforms like ppc64
				if the targets of calls are too far for the call instruction.
				To resolve the large text issue, the text is split into multiple text sections
				to allow the linker to generate long calls when necessary.
				When this happens, the vaddr for each text section is set to its offset within the text.
				Each function's offset is compared against the section vaddrs and ends to determine the containing section.
				Then the section relative offset is added to the section's
				relocated baseaddr to compute the function address.
				
				It is nosplit because it is part of the findfunc implementation.
			
				textOff is the opposite of textAddr. It converts a PC to a (virtual) offset
				to md.text, and reports whether the PC is in any Go text section.
				
				It is nosplit because it is part of the findfunc implementation.
		
			
			func activeModules() []*moduledata
			func findmoduledatap(pc uintptr) *moduledata
		
			
			func moduledataverify1(datap *moduledata)
			func pluginftabverify(md *moduledata)
		
			
			  var firstmoduledata
			  var lastmoduledatap *moduledata
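		A simplified sketch of the multi-text-section address computation that
		textAddr describes. The section data here is hypothetical; the point is
		just the rebasing step from a link-time offset to a relocated address:

			package main

			import "fmt"

			// textsect records the virtual offset range a section covers and its
			// relocated base address.
			type textsect struct {
				vaddr, end, baseaddr uintptr
			}

			// textAddr finds the section whose [vaddr, end) range contains off and
			// rebases the offset onto that section's relocated base address.
			func textAddr(sections []textsect, off uintptr) (uintptr, bool) {
				for _, s := range sections {
					if off >= s.vaddr && off < s.end {
						return s.baseaddr + (off - s.vaddr), true
					}
				}
				return 0, false
			}

			func main() {
				sections := []textsect{
					{vaddr: 0, end: 0x100000, baseaddr: 0x400000},
					{vaddr: 0x100000, end: 0x180000, baseaddr: 0x600000},
				}
				addr, ok := textAddr(sections, 0x120000)
				fmt.Printf("%#x %v\n", addr, ok)
			}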
	
		A modulehash is used to compare the ABI of a new module or a
		package in a new module with the loaded program.
		
		For each shared library a module links against, the linker creates an entry in the
		moduledata.modulehashes slice containing the name of the module, the abi hash seen
		at link time and a pointer to the runtime abi hash. These are checked in
		moduledataverify1 below.
		
		For each loaded plugin, the pkghashes slice has a modulehash of the
		newly loaded package that can be used to check the plugin's version of
		a package against any previously loaded version of the package.
		This is done in plugin.lastmoduleinit.
		
			
			linktimehash string
			modulename string
			runtimehash *string
	
		
			
			
				needPerThreadSyscall indicates that a per-thread syscall is required
				for doAllThreadsSyscall.
			
				profileTimer holds the ID of the POSIX interval timer for profiling CPU
				usage on this thread.
				
				It is valid when the profileTimerValid field is true. A thread
				creates and manages its own timer, and these fields are read and written
				only by this thread. But because some of the reads on profileTimerValid
				are in signal handling code, this field should be an atomic type.
			profileTimerValid atomic.Bool
			
				This is a pointer to a chunk of memory allocated with a special
				mmap invocation in vgetrandomGetState().
			
				// semaphore for parking on locks
	
		mProfCycleHolder holds the global heap profile cycle number (wrapped at
		mProfCycleWrap, stored starting at bit 1), and a flag (stored at bit 0) to
		indicate whether future[cycle] in all buckets has been queued to flush into
		the active profile.
		
			
			value atomic.Uint32
		
			
			
				increment increases the cycle count by one, wrapping the value at
				mProfCycleWrap. It clears the flushed flag.
			
				read returns the current cycle count.
			
				setFlushed sets the flushed flag. It returns the current cycle count and the
				previous value of the flushed flag.
		
			
			  var mProfCycle
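		A sketch of the bit packing described above: the flushed flag sits at bit 0
		and the cycle number starts at bit 1, wrapping at a wrap constant
		(hypothetical here). This mirrors the documented behavior but is not the
		runtime's code:

			package main

			import (
				"fmt"
				"sync/atomic"
			)

			const profCycleWrap = 1 << 24 // hypothetical wrap point

			type profCycleHolder struct{ value atomic.Uint32 }

			// read returns the current cycle count (stored starting at bit 1).
			func (c *profCycleHolder) read() uint32 { return c.value.Load() >> 1 }

			// setFlushed sets the flushed flag (bit 0) and returns the cycle count
			// along with the previous value of the flag.
			func (c *profCycleHolder) setFlushed() (cycle uint32, alreadyFlushed bool) {
				for {
					prev := c.value.Load()
					if c.value.CompareAndSwap(prev, prev|1) {
						return prev >> 1, prev&1 != 0
					}
				}
			}

			// increment advances the cycle by one, wrapping it, and clears the flag.
			func (c *profCycleHolder) increment() {
				for {
					prev := c.value.Load()
					next := ((prev>>1 + 1) % profCycleWrap) << 1
					if c.value.CompareAndSwap(prev, next) {
						return
					}
				}
			}

			func main() {
				var c profCycleHolder
				c.increment()
				cycle, was := c.setFlushed()
				fmt.Println(cycle, was, c.read())
			}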
	
		
			
			
				allocBits and gcmarkBits hold pointers to a span's mark and
				allocation bits. The pointers are 8 byte aligned.
				There are three arenas where this data is held.
				free: Dirty arenas that are no longer accessed
				      and can be reused.
				next: Holds information to be used in the next GC cycle.
				current: Information being used during this GC cycle.
				previous: Information being used during the last GC cycle.
				A new GC cycle starts with the call to finishsweep_m.
				finishsweep_m moves the previous arena to the free arena,
				the current arena to the previous arena, and
				the next arena to the current arena.
				The next arena is populated as the spans request
				memory to hold gcmarkBits for the next GC cycle as well
				as allocBits for newly allocated spans.
				
				The pointer arithmetic is done "by hand" instead of using
				arrays to avoid bounds checks along critical performance
				paths.
				The sweep will free the old allocBits and set allocBits to the
				gcmarkBits. The gcmarkBits are replaced with a fresh zeroed
				out memory.
			
				Cache of the allocBits at freeindex. allocCache is shifted
				such that the lowest bit corresponds to the bit freeindex.
				allocCache holds the complement of allocBits, thus allowing
				ctz (count trailing zero) to use it directly.
				allocCache may contain bits beyond s.nelems; the caller must ignore
				these.
			
				// number of allocated objects
			
				// a copy of allocCount that is stored just before this span is cached
			
				// for divide by elemsize
			
				// computed from sizeclass or from npages
			
				freeIndexForScan is like freeindex, except that freeindex is
				used by the allocator whereas freeIndexForScan is used by the
				GC scanner. They are two fields so that the GC sees the object
				is allocated only when the object and the heap bits are
				initialized (see also the assignment of freeIndexForScan in
				mallocgc, and issue 54596).
			
				freeindex is the slot index between 0 and nelems at which to begin scanning
				for the next free object in this span.
				Each allocation scans allocBits starting at freeindex until it encounters a 0
				indicating a free object. freeindex is then adjusted so that subsequent scans begin
				just past the newly discovered free object.
				
				If freeindex == nelems, this span has no free objects.
				
				allocBits is a bitmap of objects in this span.
				If n >= freeindex and allocBits[n/8] & (1<<(n%8)) is 0
				then object n is free;
				otherwise, object n is allocated. Bits starting at nelems are
				undefined and should never be referenced.
				
				Object n starts at address n*elemsize + (start << pageShift).
			gcmarkBits *gcBits
			
				// whether or not this span represents a user arena
			
				// malloc header for large objects.
			
				// end of data in span
			
				// For debugging.
			
				// list of free objects in mSpanManual spans
			
				// needs to be zeroed before allocation
			
				TODO: Look up nelems from sizeclass and remove this field if it
				helps performance.
				// number of objects in the span.
			
				// next span in list, or nil if none
			
				// number of pages in span
			
				// bitmap for pinned objects; accessed atomically
			
				// previous span in list, or nil if none
			
				// size class and noscan (uint8)
			
				// guards specials list and changes to pinnerBits
			
				// linked list of special records sorted by offset.
			
				// address of first byte of span aka s.base()
			
				// mSpanInUse etc; accessed atomically (get/set methods)
			sweepgen uint32
			
				// interval for managing chunk allocation
		
			
			(*mspan) allocBitsForIndex(allocBitIndex uintptr) markBits
			(*mspan) base() uintptr
			
				countAlloc returns the number of objects allocated in span s by
				scanning the mark bitmap.
			
				decPinCounter decreases the counter. If the counter reaches 0, the counter
				special is deleted and false is returned. Otherwise true is returned.
			
				divideByElemSize returns n/s.elemsize.
				n must be within [0, s.npages*_PageSize),
				or may be exactly s.npages*_PageSize
				if s.elemsize is from sizeclasses.go.
				
				nosplit, because it is called by objIndex, which is nosplit
			
				Returns only when span s has been swept.
			
				nosplit, because it's called by isPinned, which is nosplit
			
				heapBits returns the heap ptr/scalar bits stored at the end of the span for
				small object spans and heap arena spans.
				
				Note that the uintptr of each element means something different for small object
				spans and for heap arena spans. Small object spans are easy: they're never interpreted
				as anything but uintptr, so they're immune to differences in endianness. However, the
				heapBits for user arena spans is exposed through a dummy type descriptor, so the byte
				ordering needs to match the same byte ordering the compiler would emit. The compiler always
				emits the bitmap data in little endian byte ordering, so on big endian platforms these
				uintptrs will have their byte orders swapped from what they normally would be.
				
				heapBitsInSpan(span.elemsize) or span.isUserArenaChunk must be true.
			
				heapBitsSmallForAddr loads the heap bits for the object stored at addr from span.heapBits.
				
				addr must be the base pointer of an object in the span. heapBitsInSpan(span.elemsize)
				must be true.
			(*mspan) inList() bool
			
				incPinCounter is only called for multiple pins of the same object and records
				the _additional_ pins.
			
				Initialize a new span with the given start and npages.
			
				initHeapBits initializes the heap bitmap for a span.
			
				isFree reports whether the index'th object in s is unallocated.
				
				The caller must ensure s.state is mSpanInUse, and there must have
				been no preemption points since ensuring this (which could allow a
				GC transition, which would allow the state to change).
			
				isUnusedUserArenaChunk indicates that the arena chunk has been set to fault
				and doesn't contain any scannable memory anymore. However, it might still be
				mSpanInUse as it sits on the quarantine list, since it needs to be swept.
				
				This is not safe to execute unless the caller has ownership of the mspan or
				the world is stopped (preemption is prevented while the relevant state changes).
				
				This is really only meant to be used by accounting tests in the runtime to
				distinguish when a span shouldn't be counted (since mSpanInUse might not be
				enough).
			(*mspan) layout() (size, n, total uintptr)
			(*mspan) markBitsForBase() markBits
			(*mspan) markBitsForIndex(objIndex uintptr) markBits
			
				newPinnerBits returns a pointer to 8 byte aligned bytes to be used for this
				span's pinner bits. newPinnerBits is used to mark objects that are pinned.
				They are copied when the span is swept.
			
				nextFreeIndex returns the index of the next free object in s at
				or after s.freeindex.
				There are hardware instructions that can be used to make this
				faster if profiling warrants it.
			
				objBase returns the base pointer for the object containing addr in span.
				
				Assumes that addr points into a valid part of span (span.base() <= addr < span.limit).
			
				nosplit, because it is called by other nosplit code like findObject
			(*mspan) pinnerBitSize() uintptr
			
				refillAllocCache takes 8 bytes of s.allocBits starting at whichByte
				and negates them so that ctz (count trailing zeros) instructions
				can be used. It then places these 8 bytes into the cached 64 bit
				s.allocCache.
			
				refreshPinnerBits replaces pinnerBits with a fresh copy in the arenas for the
				next GC cycle. If it does not contain any pinned objects, pinnerBits of the
				span is set to nil.
			
				reportZombies reports any marked but free objects in s and throws.
				
				This generally means one of the following:
				
				1. User code converted a pointer to a uintptr and then back
				unsafely, and a GC ran while the uintptr was the only reference to
				an object.
				
				2. User code (or a compiler bug) constructed a bad pointer that
				points to a free slot, often a past-the-end pointer.
				
				3. The GC two cycles ago missed a pointer and freed a live object,
				but it was still live in the last cycle, so this GC cycle found a
				pointer to that object and marked it.
			(*mspan) setPinnerBits(p *pinnerBits)
			
				setUserArenaChunkToFault sets the address space for the user arena chunk to fault
				and releases any underlying memory resources.
				
				Must be in a non-preemptible state to ensure the consistency of statistics
				exported to MemStats.
			
				Find a splice point in the sorted list and check for an already existing
				record. Returns a pointer to the next-reference in the list predecessor.
				Returns true if the referenced item is an exact match.
			
				typePointersOf returns an iterator over all heap pointers in the range [addr, addr+size).
				
				addr and addr+size must be in the range [span.base(), span.limit).
				
				Note: addr+size must be passed as the limit argument to the iterator's next method on
				each iteration. This slightly awkward API is to allow typePointers to be destructured
				by the compiler.
				
				nosplit because it is used during write barriers and must not be preempted.
			
				typePointersOfType is like typePointersOf, but assumes addr points to one or more
				contiguous instances of the provided type. The provided type must not be nil.
				
				It returns an iterator that tiles typ's gcmask starting from addr. It's the caller's
				responsibility to limit iteration.
				
				nosplit because its callers are nosplit and require all their callees to be nosplit.
			
				typePointersOfUnchecked is like typePointersOf, but assumes addr is the base
				of an allocation slot in a span (the start of the object if no header, the
				header otherwise). It returns an iterator that generates all pointers
				in the range [addr, addr+span.elemsize).
				
				nosplit because it is used during write barriers and must not be preempted.
			
				userArenaNextFree reserves space in the user arena for an item of the specified
				type. If cap is not -1, this is for an array of cap elements of type t.
			
				writeHeapBitsSmall writes the heap bits for small objects whose ptr/scalar data is
				stored as a bitmap at the end of the span.
				
				Assumes dataSize is <= ptrBits*goarch.PtrSize. x must be a pointer into the span.
				heapBitsInSpan(dataSize) must be true. dataSize must be >= typ.Size_.
			(*mspan) writeUserArenaHeapBits(addr uintptr) (h writeUserArenaHeapBits)
		
			
			func findObject(p, refBase, refOff uintptr) (base uintptr, s *mspan, objIndex uintptr)
			func newUserArenaChunk() (unsafe.Pointer, *mspan)
			func spanOf(p uintptr) *mspan
			func spanOfHeap(p uintptr) *mspan
			func spanOfUnchecked(p uintptr) *mspan
		
			
			func badPointer(s *mspan, p, refBase, refOff uintptr)
			func doubleCheckHeapPointers(x, dataSize uintptr, typ *_type, header **_type, span *mspan)
			func doubleCheckHeapPointersInterior(x, interior, size, dataSize uintptr, typ *_type, header **_type, span *mspan)
			func doubleCheckHeapType(x, dataSize uintptr, gctyp *_type, header **_type, span *mspan)
			func doubleCheckTypePointersOfType(s *mspan, typ *_type, addr, size uintptr)
			func freeUserArenaChunk(s *mspan, x unsafe.Pointer)
			func gcmarknewobject(span *mspan, obj uintptr)
			func greyobject(obj, base, off uintptr, span *mspan, gcw *gcWork, objIndex uintptr)
			func heapSetTypeLarge(x, dataSize uintptr, typ *_type, span *mspan) uintptr
			func heapSetTypeNoHeader(x, dataSize uintptr, typ *_type, span *mspan) uintptr
			func heapSetTypeSmallHeader(x, dataSize uintptr, typ *_type, header **_type, span *mspan) uintptr
			func newSpecialsIter(span *mspan) specialsIter
			func nextFreeFast(s *mspan) gclinkptr
			func osStackAlloc(s *mspan)
			func osStackFree(s *mspan)
			func spanHasNoSpecials(s *mspan)
			func spanHasSpecials(s *mspan)
			func traceSpanID(s *mspan) traceArg
			func traceSpanTypeAndClass(s *mspan) traceArg
			func userArenaHeapBitsSetSliceType(typ *_type, n int, ptr unsafe.Pointer, s *mspan)
			func userArenaHeapBitsSetType(typ *_type, ptr unsafe.Pointer, s *mspan)
		
			
			  var emptymspan
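		The allocBits/allocCache scheme described above (one bit per object, 0
		meaning free, with the cache holding the complement of 64 bits so a
		count-trailing-zeros lands on the next free slot) can be sketched in a
		self-contained way. Sizes and field names are illustrative, and the span
		base stands in for (start << pageShift); this is not the runtime's code:

			package main

			import (
				"fmt"
				"math/bits"
			)

			const nelems = 16

			type toySpan struct {
				base      uintptr // address of the first byte of the span
				elemsize  uintptr
				freeindex uintptr
				allocBits [nelems/8 + 1]byte
			}

			// isFree: for n >= freeindex, object n is free when its bit is 0.
			func (s *toySpan) isFree(n uintptr) bool {
				return n >= s.freeindex && s.allocBits[n/8]&(1<<(n%8)) == 0
			}

			// objAddr: object n starts at n*elemsize past the span base.
			func (s *toySpan) objAddr(n uintptr) uintptr { return s.base + n*s.elemsize }

			// refillAllocCache loads 8 bytes of allocBits and negates them so that
			// TrailingZeros64 points directly at the next free (0) bit.
			func (s *toySpan) refillAllocCache(whichByte uintptr) uint64 {
				var cache uint64
				for i := uintptr(0); i < 8 && whichByte+i < uintptr(len(s.allocBits)); i++ {
					cache |= uint64(s.allocBits[whichByte+i]) << (8 * i)
				}
				return ^cache
			}

			func main() {
				s := &toySpan{base: 0xc000_0000, elemsize: 64}
				s.allocBits[0] = 0b0000_0111 // objects 0..2 allocated
				next := uintptr(bits.TrailingZeros64(s.refillAllocCache(0)))
				fmt.Println("next free index:", next, "free?", s.isFree(next))
				fmt.Printf("object address: %#x\n", s.objAddr(next))
			}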
	
		mSpanList heads a linked list of spans.
		
			
			
				// first span in list, or nil if none
			
				// last span in list, or nil if none
		
			
			
				Initialize an empty doubly-linked list.
			(*mSpanList) insert(span *mspan)
			(*mSpanList) insertBack(span *mspan)
			(*mSpanList) isEmpty() bool
			(*mSpanList) remove(span *mspan)
			
				takeAll removes all spans from other and inserts them at the front
				of list.
	
		An mspan representing actual memory has state mSpanInUse,
		mSpanManual, or mSpanFree. Transitions between these states are
		constrained as follows:
		
		  - A span may transition from free to in-use or manual during any GC
		    phase.
		
		  - During sweeping (gcphase == _GCoff), a span may transition from
		    in-use to free (as a result of sweeping) or manual to free (as a
		    result of stacks being freed).
		
		  - During GC (gcphase != _GCoff), a span *must not* transition from
		    manual or in-use to free. Because concurrent GC may read a pointer
		    and then look up its span, the span state must be monotonic.
		
		Setting mspan.state to mSpanInUse or mSpanManual must be done
		atomically and only after all other span fields are valid.
		Likewise, if inspecting a span is contingent on it being
		mSpanInUse, the state should be loaded atomically and checked
		before depending on other fields. This allows the garbage collector
		to safely deal with potentially invalid pointers, since resolving
		such pointers may race with a span being allocated.
		
			
			const mSpanDead
			const mSpanInUse
			const mSpanManual
	
		mSpanStateBox holds an atomic.Uint8 to provide atomic operations on
		an mSpanState. This is a separate type to disallow accidental comparison
		or assignment with mSpanState.
		
			
			s atomic.Uint8
		
			
			(*mSpanStateBox) get() mSpanState
			(*mSpanStateBox) set(s mSpanState)
	
		
			
			
				// profiling bucket hash table
			enablegc bool
			
				Statistics about GC overhead.
				// updated atomically or during STW
			
				// fraction of CPU time used by GC
			
				Statistics about malloc heap.
			
				// heapInUse at mark termination of the previous GC
			
				// last gc (monotonic time)
			
				Protected by mheap or worldsema during GC.
				// last gc (in unix time)
			mcache_sys sysMemStat
			
				Statistics about allocation of low-level fixed-size structures.
			
				// number of user-forced GCs
			numgc uint32
			
				Miscellaneous statistics.
				// updated atomically or during STW
			
				// circular buffer of recent gc end times (nanoseconds since 1970)
			
				// circular buffer of recent gc pause lengths
			pause_total_ns uint64
			
				Statistics about stacks.
				// only counts newosproc0 stack in mstats; differs from MemStats.StackSys
		
			
			  var memstats
	
		mTraceState is per-M state for the tracer.
		
			
			
				// Per-M traceBuf for writing. Indexed by trace.gen%2.
			
				// Snapshot of alllink or freelink.
			
				// gp.throwsplit upon calling traceLocker.writer. For debugging.
			
				// Whether we've reentered tracing from within tracing.
			
				// seqlock indicating that this M is writing to a trace buffer.
	
		muintptr is a *m that is not tracked by the garbage collector.
		
	Because we do free Ms, there are some additional constraints on
		muintptrs:
		
		 1. Never hold an muintptr locally across a safe point.
		
		 2. Any muintptr in the heap must be owned by the M itself so it can
		    ensure it is not in use when the last true *m is released.
		
			
			( muintptr) ptr() *m
			(*muintptr) set(m *m)
		
			
			func mutexWaitListHead(v uintptr) muintptr
	
		Mutual exclusion locks.  In the uncontended case,
		as fast as spin locks (just a few user-level instructions),
		but on the contention path they sleep in the kernel.
		A zeroed Mutex is unlocked (no need to initialize each lock).
		Initialization is helpful for static lock ranking, but not required.
		
			
			
				Futex-based impl treats it as uint32 key,
				while sema-based impl as M* waitm.
				Used to be a union, but unions break precise GC.
			
				Empty struct if lock ranking is disabled, otherwise includes the lock rank
		
			
			func assertLockHeld(l *mutex)
			func assertWorldStoppedOrLockHeld(l *mutex)
			func getLockRank(l *mutex) lockRank
			func goparkunlock(lock *mutex, reason waitReason, traceReason traceBlockReason, traceskip int)
			func lock(l *mutex)
			func lock2(l *mutex)
			func lockInit(l *mutex, rank lockRank)
			func lockWithRank(l *mutex, rank lockRank)
			func lockWithRankMayAcquire(l *mutex, rank lockRank)
			func mutexContended(l *mutex) bool
			func mutexPreferLowLatency(l *mutex) bool
			func unlock(l *mutex)
			func unlock2(l *mutex)
			func unlock2Wake(l *mutex)
			func unlockWithRank(l *mutex)
		
			
			  var allglock
			  var allpLock
			  var deadlock
			  var debuglock
			  var finlock
			  var itabLock
			  var netpollInitLock
			  var paniclk
			  var profBlockLock
			  var profInsertLock
			  var profMemActiveLock
			  var raceFiniLock
	
		mWaitList is part of the M struct, and holds the list of Ms that are waiting
		for a particular runtime.mutex.
		
		When an M is unable to immediately obtain a lock, it adds itself to the list
		of Ms waiting for the lock. It does that via this struct's next field,
		forming a singly-linked list with the mutex's key field pointing to the head
		of the list.
		
			
			
				// next m waiting for lock
	
		
			
			func goexit(neverCallThisFunction)
	
		sleep and wakeup on one-time events.
		before any calls to notesleep or notewakeup,
		must call noteclear to initialize the Note.
		then, exactly one thread can call notesleep
		and exactly one thread can call notewakeup (once).
		once notewakeup has been called, the notesleep
		will return.  future notesleep will return immediately.
		subsequent noteclear must be called only after
		previous notesleep has returned, e.g. it's disallowed
		to call noteclear straight after notewakeup.
		
		notetsleep is like notesleep but wakes up after
		a given number of nanoseconds even if the event
		has not yet happened.  if a goroutine uses notetsleep to
		wake up early, it must wait to call noteclear until it
		can be sure that no other goroutine is calling
		notewakeup.
		
		notesleep/notetsleep are generally called on g0,
		notetsleepg is similar to notetsleep but is called on user g.
		
			
			
				Futex-based impl treats it as uint32 key,
				while sema-based impl as M* waitm.
		
			
			func noteclear(n *note)
			func notesleep(n *note)
			func notetsleep(n *note, ns int64) bool
			func notetsleep_internal(n *note, ns int64) bool
			func notetsleepg(n *note, ns int64) bool
			func notewakeup(n *note)
			func sigNoteSetup(*note)
			func sigNoteSleep(*note)
			func sigNoteWakeup(*note)
	
		notifyList is a ticket-based notification list used to implement sync.Cond.
		
		It must be kept in sync with the sync package.
		
			
			head *sudog
			
				List of parked waiters.
			
				notify is the ticket number of the next waiter to be notified. It can
				be read outside the lock, but is only written to with lock held.
				
				Both wait & notify can wrap around, and such cases will be correctly
				handled as long as their "unwrapped" difference is bounded by 2^31.
				For this not to be the case, we'd need to have 2^31+ goroutines
				blocked on the same condvar, which is currently not possible.
			tail *sudog
			
				wait is the ticket number of the next waiter. It is atomically
				incremented outside the lock.
		
			
			func notifyListAdd(l *notifyList) uint32
			func notifyListNotifyAll(l *notifyList)
			func notifyListNotifyOne(l *notifyList)
			func notifyListWait(l *notifyList, t uint32)
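		The wait and notify tickets described above may wrap around a uint32; as
		the comment notes, this stays correct as long as the unwrapped difference
		is below 2^31, because unsigned subtraction followed by a signed
		reinterpretation recovers the distance. A small sketch of that comparison
		(not the sync/runtime source):

			package main

			import (
				"fmt"
				"math"
			)

			// less reports whether ticket a was issued before ticket b, even across
			// a uint32 wraparound, provided the two are less than 2^31 apart.
			func less(a, b uint32) bool { return int32(a-b) < 0 }

			func main() {
				fmt.Println(less(5, 6))              // true: 5 before 6
				fmt.Println(less(math.MaxUint32, 3)) // true: issued just before the wrap
				fmt.Println(less(3, math.MaxUint32)) // false
			}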
	
		notInHeap is off-heap memory allocated by a lower-level allocator
		like sysAlloc or persistentAlloc.
		
		In general, it's better to use real types which embed
		internal/runtime/sys.NotInHeap, but this serves as a generic type
		for situations where that isn't possible (like in the allocators).
		
		TODO: Use this as the return type of sysAlloc, persistentAlloc, etc?
		
			
			(*notInHeap) add(bytes uintptr) *notInHeap
		
			
			func persistentalloc1(size, align uintptr, sysStat *sysMemStat) *notInHeap
		
			
			  var persistentChunks *notInHeap
	
		A notInHeapSlice is a slice backed by internal/runtime/sys.NotInHeap memory.
		
			
			array *notInHeap
			cap int
			len int
	
		offAddr represents an address in a contiguous view
		of the address space on systems where the address space is
		segmented. On other systems, it's just a normal address.
		
			
			
				a is just the virtual address, but should never be used
				directly. Call addr() to get this value instead.
		
			
			
				add adds a uintptr offset to the offAddr.
			
				addr returns the virtual address for this offset address.
			
				diff returns the amount of bytes in between the
				two offAddrs.
			
				equal returns true if the two offAddr values are equal.
			
				lessEqual returns true if l1 is less than or equal to l2 in
				the offset address space.
			
				lessThan returns true if l1 is less than l2 in the offset
				address space.
			
				sub subtracts a uintptr offset from the offAddr.
		
			
			func levelIndexToOffAddr(level, idx int) offAddr
			func maxSearchAddr() offAddr
		
			
			func offAddrToLevelIndex(level int, addr offAddr) int
		
			
			  var maxOffAddr
			  var minOffAddr
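		A sketch of why the offset view matters on a segmented address space:
		subtracting a base offset (hypothetical here) before comparing keeps
		addresses that sit above and below the wrap point in the right order.
		This is only an illustration of the idea, not the runtime's arithmetic:

			package main

			import "fmt"

			const base = uintptr(0xffff_8000_0000_0000) // hypothetical region start

			type offAddr struct{ a uintptr }

			// lessThan compares two addresses in the linearized (offset) view.
			func (l offAddr) lessThan(m offAddr) bool { return l.a-base < m.a-base }

			// add advances an address within the linear view.
			func (l offAddr) add(n uintptr) offAddr { return offAddr{l.a + n} }

			func main() {
				lo := offAddr{base}
				hi := offAddr{0x0000_1000_0000_0000} // wrapped past zero in raw terms
				// In raw terms lo.a > hi.a, but in the offset view lo comes first.
				fmt.Println(lo.lessThan(hi), hi.lessThan(lo))
			}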
	
		
			
			
				// pool of available defer structs (see panic.go)
			deferpoolbuf [32]*_defer
			
				Available G's (status == Gdead)
			
				Per-P GC state
				// Nanoseconds in assistAlloc
			
				// Nanoseconds in fractional mark worker (atomic)
			
				gcMarkWorkerMode is the mode for the next mark worker to run in.
				That is, this is used to communicate with the worker goroutine
				selected for immediate execution by
				gcController.findRunnableGCWorker. When scheduling other goroutines,
				this field must be set to gcMarkWorkerNotWorker.
			
				gcMarkWorkerStartTime is the nanotime() at which the most recent
				mark worker started.
			
				gcStopTime is the nanotime timestamp that this P last entered _Pgcstop.
			
				gcw is this P's GC work buffer cache. The work buffer is
				filled by write barriers, drained by mutator assists, and
				disposed on certain GC state transitions.
			
				Cache of goroutine ids, amortizes accesses to runtime·sched.goidgen.
			goidcacheend uint64
			id int32
			
				limiterEvent tracks events for the GC CPU limiter.
			link puintptr
			
				// back-link to associated m (nil if idle)
			
				maxStackScanDelta accumulates the amount of stack space held by
				live goroutines (i.e. those eligible for stack scanning).
				Flushed to gcController.maxStackScan once maxStackScanSlack
				or -maxStackScanSlack is reached.
			mcache *mcache
			
				Cache of mspan objects from the heap.
			
				// per-P to avoid mutex
			pcache pageCache
			
				Cache of a single pinner object to reduce allocations from repeated
				pinner creation.
			
				preempt is set to indicate that this P should enter the
				scheduler ASAP (regardless of what G is running on it).
			raceprocctx uintptr
			
				// if 1, run sched.safePointFn at next safe point
			
				runnext, if non-nil, is a runnable G that was ready'd by
				the current G and should be run next instead of what's in
				runq if there's time remaining in the running G's time
				slice. It will inherit the time left in the current time
				slice. If a set of goroutines is locked in a
				communicate-and-wait pattern, this schedules that set as a
				unit and eliminates the (potentially large) scheduling
				latency that otherwise arises from adding the ready'd
				goroutines to the end of the run queue.
				
				Note that while other P's may atomically CAS this to zero,
				only the owner P can CAS it to a valid G.
			runq [256]guintptr
			
				Queue of runnable goroutines. Accessed without lock.
			runqtail uint32
			
				gc-time statistics about current goroutines
				Note that this differs from maxStackScan in that this
				accumulates the actual stack observed to be used at GC time (hi - sp),
				not an instantaneous measure of the total stack size that might need
				to be scanned (hi - lo).
				// stack size of goroutines scanned by this P
			
				// number of goroutines scanned by this P
			
				// incremented on every scheduler call
			
				statsSeq is a counter indicating whether this P is currently
				writing any stats. Its value is even when not, odd when it is.
			
				// one of pidle/prunning/...
			sudogbuf [128]*sudog
			sudogcache []*sudog
			
				// incremented on every system call
			
				// last tick observed by sysmon
			
				Timer heap.
			trace pTraceState
			
				wbBuf is this P's GC write barrier buffer.
				
				TODO: Consider caching this in the running G.
		
			
			
				destroy releases all of the resources associated with pp and
				transitions it to status _Pdead.
				
				sched.lock must be held and the world must be stopped.
			
				init initializes pp, which may be a freshly allocated p or a
				previously destroyed p, and transitions it to status _Pgcstop.
		
			
			func checkIdleGCNoP() (*p, *g)
			func checkRunqsNoP(allpSnapshot []*p, idlepMaskSnapshot pMask) *p
			func pidleget(now int64) (*p, int64)
			func pidlegetSpinning(now int64) (*p, int64)
			func procresize(nprocs int32) *p
			func releasep() *p
			func releasepNoTrace() *p
		
			
			func acquirep(pp *p)
			func allocm(pp *p, fn func(), id int64) *m
			func checkRunqsNoP(allpSnapshot []*p, idlepMaskSnapshot pMask) *p
			func checkTimersNoP(allpSnapshot []*p, timerpMaskSnapshot pMask, pollUntil int64) int64
			func exitsyscallfast(oldp *p) bool
			func gcMarkWorkAvailable(p *p) bool
			func gfget(pp *p) *g
			func gfpurge(pp *p)
			func gfput(pp *p, gp *g)
			func globrunqget(pp *p, max int32) *g
			func handoffp(pp *p)
			func newm(fn func(), pp *p, id int64)
			func pidleput(pp *p, now int64) int64
			func preemptone(pp *p) bool
			func runqdrain(pp *p) (drainQ gQueue, n uint32)
			func runqempty(pp *p) bool
			func runqget(pp *p) (gp *g, inheritTime bool)
			func runqgrab(pp *p, batch *[256]guintptr, batchHead uint32, stealRunNextG bool) uint32
			func runqput(pp *p, gp *g, next bool)
			func runqputbatch(pp *p, q *gQueue, qsize int)
			func runqputslow(pp *p, gp *g, h, t uint32) bool
			func runqsteal(pp, p2 *p, stealRunNextG bool) *g
			func startm(pp *p, spinning, lockheld bool)
			func traceCPUSample(gp *g, mp *m, pp *p, stk []uintptr)
			func wbBufFlush1(pp *p)
			func wirep(pp *p)
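		A toy sketch of the runnext behavior described above: a goroutine readied
		by the running goroutine can displace runnext, and the displaced one falls
		back to the tail of the local run queue. Names are illustrative; this is
		not the scheduler's code:

			package main

			import "fmt"

			type runQueue struct {
				runnext string   // runs next, inheriting the remaining time slice
				runq    []string // ordinary local run queue
			}

			// put enqueues g; with next=true it takes the runnext slot and sends
			// any previous occupant to the back of the queue.
			func (q *runQueue) put(g string, next bool) {
				if next {
					if old := q.runnext; old != "" {
						q.runq = append(q.runq, old)
					}
					q.runnext = g
					return
				}
				q.runq = append(q.runq, g)
			}

			// get prefers runnext, then the head of the queue.
			func (q *runQueue) get() (string, bool) {
				if g := q.runnext; g != "" {
					q.runnext = ""
					return g, true
				}
				if len(q.runq) == 0 {
					return "", false
				}
				g := q.runq[0]
				q.runq = q.runq[1:]
				return g, false
			}

			func main() {
				var q runQueue
				q.put("g1", false)
				q.put("g2", true) // g2 was just readied by the running goroutine
				q.put("g3", true) // g3 displaces g2; g2 goes to the tail
				for {
					g, inherit := q.get()
					if g == "" {
						break
					}
					fmt.Println(g, "inheritTime:", inherit)
				}
			}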
	
		
			
			
				chunkHugePages indicates whether page bitmap chunks should be backed
				by huge pages.
			
				chunks is a slice of bitmap chunks.
				
				The total size of chunks is quite large on most 64-bit platforms
				(O(GiB) or more) if flattened, so rather than making one large mapping
				(which has problems on some platforms, even when PROT_NONE) we use a
				two-level sparse array approach similar to the arena index in mheap.
				
				To find the chunk containing a memory address `a`, do:
				  chunkOf(chunkIndex(a))
				
				Below is a table describing the configuration for chunks for various
				heapAddrBits supported by the runtime.
				
				heapAddrBits | L1 Bits | L2 Bits | L2 Entry Size
				------------------------------------------------
				32           | 0       | 10      | 128 KiB
				33 (iOS)     | 0       | 11      | 256 KiB
				48           | 13      | 13      | 1 MiB
				
				There's no reason to use the L1 part of chunks on 32-bit: the
				address space is small, so the L2 is small. For platforms with a
				48-bit address space, we pick the L1 such that the L2 is 1 MiB
				in size, which is a good balance between low granularity without
				making the impact on BSS too high (note the L1 is stored directly
				in pageAlloc).
				
				To iterate over the bitmap, use inUse to determine which ranges
				are currently available. Otherwise one might iterate over unused
				ranges.
				
				Protected by mheapLock.
				
				TODO(mknyszek): Consider changing the definition of the bitmap
				such that 1 means free and 0 means in-use so that summaries and
				the bitmaps align better on zero-values.
			
				start and end represent the chunk indices
				which pageAlloc knows about. It assumes
				chunks in the range [start, end) are
				currently ready to use.
			
				inUse is a slice of ranges of address space which are
				known by the page allocator to be currently in-use (passed
				to grow).
				
				We care much more about having a contiguous heap in these cases
				and take additional measures to ensure that, so in nearly all
				cases this should have just 1 element.
				
				All access is protected by the mheapLock.
			
				mheapLock is a pointer to mheap_.lock. This level of indirection
				makes it possible to test pageAlloc independently of the runtime allocator.
			
				scav stores the scavenger state.
			
				The address to start an allocation search with. It must never
				point to any memory that is not contained in inUse, i.e.
				inUse.contains(searchAddr.addr()) must always be true. The one
				exception to this rule is that it may take on the value of
				maxOffAddr to indicate that the heap is exhausted.
				
				We guarantee that all valid heap addresses below this value
				are allocated and not worth searching.
			
				start and end represent the chunk indices
				which pageAlloc knows about. It assumes
				chunks in the range [start, end) are
				currently ready to use.
			
				Radix tree of summaries.
				
				Each slice's cap represents the whole memory reservation.
				Each slice's len reflects the allocator's maximum known
				mapped heap address for that level.
				
				The backing store of each summary level is reserved in init
				and may or may not be committed in grow (small address spaces
				may commit all the memory in init).
				
				The purpose of keeping len <= cap is to enforce bounds checks
				on the top end of the slice so that instead of an unknown
				runtime segmentation fault, we get a much friendlier out-of-bounds
				error.
				
				To iterate over a summary level, use inUse to determine which ranges
				are currently available. Otherwise one might try to access
				memory which is only Reserved which may result in a hard fault.
				
				We may still get segmentation faults < len since some of that
				memory may not be committed yet.
			
				summaryMappedReady is the number of bytes mapped in the Ready state
				in the summary structure. Used only for testing currently.
				
				Protected by mheapLock.
			
				sysStat is the runtime memstat to update when new system
				memory is committed by the pageAlloc for allocation metadata.
			
				Whether or not this struct is being used in tests.
		
			
			
				alloc allocates npages worth of memory from the page heap, returning the base
				address for the allocation and the amount of scavenged memory in bytes
				contained in the region [base address, base address + npages*pageSize).
				
				Returns a 0 base address on failure, in which case other returned values
				should be ignored.
				
				p.mheapLock must be held.
				
				Must run on the system stack because p.mheapLock must be held.
			
				allocRange marks the range of memory [base, base+npages*pageSize) as
				allocated. It also updates the summaries to reflect the newly-updated
				bitmap.
				
				Returns the amount of scavenged memory in bytes present in the
				allocated range.
				
				p.mheapLock must be held.
			
				allocToCache acquires a pageCachePages-aligned chunk of free pages which
				may not be contiguous, and returns a pageCache structure which owns the
				chunk.
				
				p.mheapLock must be held.
				
				Must run on the system stack because p.mheapLock must be held.
			
				chunkOf returns the chunk at the given chunk index.
				
				The chunk index must be valid or this method may throw.
			
				enableChunkHugePages enables huge pages for the chunk bitmap mappings (disabled by default).
				
				This function is idempotent.
				
				A note on latency: for sufficiently small heaps (<10s of GiB) this function will take constant
				time, but may take time proportional to the size of the mapped heap beyond that.
				
				The heap lock must not be held over this operation, since it will briefly acquire
				the heap lock.
				
				Must be called on the system stack because it acquires the heap lock.
			
				find searches for the first (address-ordered) contiguous free region of
				npages in size and returns a base address for that region.
				
				It uses p.searchAddr to prune its search and assumes that no palloc chunks
				below chunkIndex(p.searchAddr) contain any free memory at all.
				
				find also computes and returns a candidate p.searchAddr, which may or
				may not prune more of the address space than p.searchAddr already does.
				This candidate is always a valid p.searchAddr.
				
				find represents the slow path and the full radix tree search.
				
				Returns a base address of 0 on failure, in which case the candidate
				searchAddr returned is invalid and must be ignored.
				
				p.mheapLock must be held.
			
				findMappedAddr returns the smallest mapped offAddr that is
				>= addr. That is, if addr refers to mapped memory, then it is
				returned. If addr is higher than any mapped region, then
				it returns maxOffAddr.
				
				p.mheapLock must be held.
			
				free returns npages worth of memory starting at base back to the page heap.
				
				p.mheapLock must be held.
				
				Must run on the system stack because p.mheapLock must be held.
			
				grow sets up the metadata for the address range [base, base+size).
				It may allocate metadata, in which case *p.sysStat will be updated.
				
				p.mheapLock must be held.
			(*pageAlloc) init(mheapLock *mutex, sysStat *sysMemStat, test bool)
			
				scavenge scavenges nbytes worth of free pages, starting with the
				highest address first. Successive calls continue from where it left
				off until the heap is exhausted. force makes all memory available to
				scavenge, ignoring huge page heuristics.
				
				Returns the amount of memory scavenged in bytes.
				
				scavenge always tries to scavenge nbytes worth of memory, and will
				only fail to do so if the heap is exhausted for now.
			
				scavengeOne walks over the chunk at chunk index ci and searches for
				a contiguous run of pages to scavenge. It will try to scavenge
				at most max bytes at once, but may scavenge more to avoid
				breaking huge pages. Once it scavenges some memory it returns
				how much it scavenged in bytes.
				
				searchIdx is the page index to start searching from in ci.
				
				Returns the number of bytes scavenged.
				
				Must run on the systemstack because it acquires p.mheapLock.
			
				sysGrow performs architecture-dependent operations on heap
				growth for the page allocator, such as mapping in new memory
				for summaries. It also updates the length of the slices in
				p.summary.
				
				base is the base of the newly-added heap memory and limit is
				the first address past the end of the newly-added heap memory.
				Both must be aligned to pallocChunkBytes.
				
				The caller must update p.start and p.end after calling sysGrow.
			
				sysInit performs architecture-dependent initialization of fields
				in pageAlloc. pageAlloc should be uninitialized except for sysStat
				if any runtime statistic should be updated.
			
				tryChunkOf returns the bitmap data for the given chunk.
				
				Returns nil if the chunk data has not been mapped.
			
				update updates heap metadata. It must be called each time the bitmap
				is updated.
				
				If contig is true, update does some optimizations assuming that there was
				a contiguous allocation or free between addr and addr+npages. alloc indicates
				whether the operation performed was an allocation or a free.
				
				p.mheapLock must be held.
	
		pageBits is a bitmap representing one bit per page in a palloc chunk.
		
			
			
				block64 returns the 64-bit aligned block of bits containing the i'th bit.
			
				clear clears bit i of pageBits.
			
				clearAll clears all the bits of b.
			
				clearBlock64 clears, within the 64-bit aligned block of bits containing the i'th bit,
				the bits that are set in v.
			
				clearRange clears bits in the range [i, i+n).
			
				get returns the value of the i'th bit in the bitmap.
			
				popcntRange counts the number of set bits in the
				range [i, i+n).
			
				set sets bit i of pageBits.
			
				setAll sets all the bits of b.
			
				setBlock64 sets, within the 64-bit aligned block of bits containing the i'th bit,
				the bits that are set in v.
			
				setRange sets bits in the range [i, i+n).
	
		pageCache represents a per-p cache of pages the allocator can
		allocate from without a lock. More specifically, it represents
		a pageCachePages*pageSize chunk of memory with 0 or more free
		pages in it.
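		
		As a rough illustration of the bookkeeping described above (the type, field, and
		constant names below are hypothetical, not the runtime's), allocating a single page
		from such a cache amounts to finding and clearing the lowest set bit of the free
		bitmap:
		
			package sketch
			
			import "math/bits"
			
			const pageSize = 8192 // assumed page size, for illustration only
			
			// pageCacheSketch models a 64-page chunk with a free-page bitmap.
			type pageCacheSketch struct {
				base uintptr // base address of the chunk
				free uint64  // 1 bits mark free pages
			}
			
			// allocOne returns the address of one free page, or 0 if none remain.
			func (c *pageCacheSketch) allocOne() uintptr {
				if c.free == 0 {
					return 0
				}
				i := bits.TrailingZeros64(c.free) // index of the lowest free page
				c.free &^= 1 << i                 // mark it allocated
				return c.base + uintptr(i)*pageSize
			}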
		
			
			
				// base address of the chunk
			
				// 64-bit bitmap representing free pages (1 means free)
			
				// 64-bit bitmap representing scavenged pages (1 means scavenged)
		
			
			
				alloc allocates npages from the page cache and is the main entry
				point for allocation.
				
				Returns a base address and the amount of scavenged memory in the
				allocated region in bytes.
				
				Returns a base address of zero on failure, in which case the
				amount of scavenged memory should be ignored.
			
				allocN is a helper which attempts to allocate npages worth of pages
				from the cache. It represents the general case for allocating from
				the page cache.
				
				Returns a base address and the amount of scavenged memory in the
				allocated region in bytes.
			
				empty reports whether the page cache has no free pages.
			
				flush empties out unallocated free pages in the given cache
				into s. Then, it clears the cache, such that empty returns
				true.
				
				p.mheapLock must be held.
				
				Must run on the system stack because p.mheapLock must be held.
	
		pallocBits is a bitmap that tracks page allocations for at most one
		palloc chunk.
		
		The precise representation is an implementation detail, but for the
		sake of documentation, 0s are free pages and 1s are allocated pages.
		
			
			
				allocAll allocates all the bits of b.
			
				allocPages64 allocates a 64-bit block of 64 pages aligned to 64 pages according
				to the bits set in alloc. The block set is the one containing the i'th page.
			
				allocRange allocates the range [i, i+n).
			
				find searches for npages contiguous free pages in pallocBits and returns
				the index where that run starts, as well as the index of the first free page
				it found in the search. searchIdx represents the first known free page and
				where to begin the next search from.
				
				If find fails to find any free space, it returns an index of ^uint(0) and
				the new searchIdx should be ignored.
				
				Note that if npages == 1, the two returned values will always be identical.
			
				find1 is a helper for find which searches for a single free page
				in the pallocBits and returns the index.
				
				See find for an explanation of the searchIdx parameter.
			
				findLargeN is a helper for find which searches for npages contiguous free pages
				in this pallocBits and returns the index where that run starts, as well as the
				index of the first free page it found in its search.
				
				See find for an explanation of the searchIdx parameter.
				
				Returns a ^uint(0) index on failure and the new searchIdx should be ignored.
				
				findLargeN assumes npages > 64, where any such run of free pages
				crosses at least one aligned 64-bit boundary in the bits.
			
				findSmallN is a helper for find which searches for npages contiguous free pages
				in this pallocBits and returns the index where that run of contiguous pages
				starts as well as the index of the first free page it finds in its search.
				
				See find for an explanation of the searchIdx parameter.
				
				Returns a ^uint(0) index on failure and the new searchIdx should be ignored.
				
				findSmallN assumes npages <= 64, where any such contiguous run of pages
				crosses at most one aligned 64-bit boundary in the bits.
			
				free frees the range [i, i+n) of pages in the pallocBits.
			
				free1 frees a single page in the pallocBits at i.
			
				freeAll frees all the bits of b.
			
				pages64 returns a 64-bit bitmap representing a block of 64 pages aligned
				to 64 pages. The returned block of pages is the one containing the i'th
				page in this pallocBits. Each bit represents whether the page is in-use.
			
				summarize returns a packed summary of the bitmap in pallocBits.
	
		pallocData encapsulates pallocBits and a bitmap for
		whether or not a given page is scavenged in a single
		structure. It's effectively a pallocBits with
		additional functionality.
		
		Update the comment on (*pageAlloc).chunks should this
		structure change.
		
			
			pallocBits pallocBits
			scavenged pageBits
		
			
			
				allocAll sets every bit in the bitmap to 1 and updates
				the scavenged bits appropriately.
			
				allocPages64 allocates a 64-bit block of 64 pages aligned to 64 pages according
				to the bits set in alloc. The block set is the one containing the i'th page.
			
				allocRange sets bits [i, i+n) in the bitmap to 1 and
				updates the scavenged bits appropriately.
			
				find searches for npages contiguous free pages in pallocBits and returns
				the index where that run starts, as well as the index of the first free page
				it found in the search. searchIdx represents the first known free page and
				where to begin the next search from.
				
				If find fails to find any free space, it returns an index of ^uint(0) and
				the new searchIdx should be ignored.
				
				Note that if npages == 1, the two returned values will always be identical.
			
				find1 is a helper for find which searches for a single free page
				in the pallocBits and returns the index.
				
				See find for an explanation of the searchIdx parameter.
			
				findLargeN is a helper for find which searches for npages contiguous free pages
				in this pallocBits and returns the index where that run starts, as well as the
				index of the first free page it found in its search.
				
				See find for an explanation of the searchIdx parameter.
				
				Returns a ^uint(0) index on failure and the new searchIdx should be ignored.
				
				findLargeN assumes npages > 64, where any such run of free pages
				crosses at least one aligned 64-bit boundary in the bits.
			
				findScavengeCandidate returns a start index and a size for this pallocData
				segment which represents a contiguous region of free and unscavenged memory.
				
				searchIdx indicates the page index within this chunk to start the search, but
				note that findScavengeCandidate searches backwards through the pallocData. As
				a result, it will return the highest scavenge candidate in address order.
				
				min indicates a hard minimum size and alignment for runs of pages. That is,
				findScavengeCandidate will not return a region smaller than min pages in size,
				or that is min pages or greater in size but not aligned to min. min must be
				a non-zero power of 2 <= maxPagesPerPhysPage.
				
				max is a hint for how big of a region is desired. If max >= pallocChunkPages, then
				findScavengeCandidate effectively returns entire free and unscavenged regions.
				If max < pallocChunkPages, it may truncate the returned region such that size is
				max. However, findScavengeCandidate may still return a larger region if, for
				example, it chooses to preserve huge pages, or if max is not aligned to min (it
				will round up). That is, even if max is small, the returned size is not guaranteed
				to be equal to max. max is allowed to be less than min, in which case it is as if
				max == min.
			
				findSmallN is a helper for find which searches for npages contiguous free pages
				in this pallocBits and returns the index where that run of contiguous pages
				starts as well as the index of the first free page it finds in its search.
				
				See find for an explanation of the searchIdx parameter.
				
				Returns a ^uint(0) index on failure and the new searchIdx should be ignored.
				
				findSmallN assumes npages <= 64, where any such contiguous run of pages
				crosses at most one aligned 64-bit boundary in the bits.
			
				free frees the range [i, i+n) of pages in the pallocBits.
			
				free1 frees a single page in the pallocBits at i.
			
				freeAll frees all the bits of b.
			
				pages64 returns a 64-bit bitmap representing a block of 64 pages aligned
				to 64 pages. The returned block of pages is the one containing the i'th
				page in this pallocBits. Each bit represents whether the page is in-use.
			
				summarize returns a packed summary of the bitmap in pallocBits.
	
		pallocSum is a packed summary type which packs three numbers: start, max,
		and end into a single 8-byte value. Each of these values are a summary of
		a bitmap and are thus counts, each of which may have a maximum value of
		2^21 - 1, or all three may be equal to 2^21. The latter case is represented
		by just setting the 64th bit.
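		
		A minimal sketch of that packing, assuming the layout implied above (three 21-bit
		fields packed from the low bits, with bit 63 marking the all-2^21 case); the
		runtime's packPallocSum may differ in detail:
		
			package sketch
			
			const logMaxPages = 21            // each count fits in 21 bits
			const maxPages = 1 << logMaxPages // 2^21, the "completely free" count
			const countMask = maxPages - 1    // low 21 bits
			
			// pack packs start, max, and end into a single uint64.
			func pack(start, max, end uint) uint64 {
				if max == maxPages {
					// All three counts are 2^21; represent with just the 64th bit.
					return uint64(1) << 63
				}
				return uint64(start&countMask) |
					uint64(max&countMask)<<logMaxPages |
					uint64(end&countMask)<<(2*logMaxPages)
			}
			
			// unpack reverses pack.
			func unpack(s uint64) (start, max, end uint) {
				if s&(uint64(1)<<63) != 0 {
					return maxPages, maxPages, maxPages
				}
				return uint(s & countMask),
					uint(s >> logMaxPages & countMask),
					uint(s >> (2 * logMaxPages) & countMask)
			}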
		
			
			
				end extracts the end value from a packed sum.
			
				max extracts the max value from a packed sum.
			
				start extracts the start value from a packed sum.
			
				unpack unpacks all three values from the summary.
		
			
			func mergeSummaries(sums []pallocSum, logMaxPagesPerSum uint) pallocSum
			func packPallocSum(start, max, end uint) pallocSum
		
			
			func mergeSummaries(sums []pallocSum, logMaxPagesPerSum uint) pallocSum
		
			
			const freeChunkSum
	
		pcHeader holds data used by the pclntab lookups.
		
			
			
				// offset to the cutab variable from pcHeader
			
				// offset to the filetab variable from pcHeader
			
				// offset to the funcnametab variable from pcHeader
			
				// 0xFFFFFFF1
			
				// min instruction size
			
				// number of entries in the file tab
			
				// number of functions in the module
			
				// 0,0
			
				// 0,0
			
				// offset to the pclntab variable from pcHeader
			
				// offset to the pctab variable from pcHeader
			
				// size of a ptr in bytes
			
				// base for function entry PC offsets in this module, equal to moduledata.text
	
		
			
			entries [2][8]pcvalueCacheEnt
			inUse int
	
		
			
			off uint32
			
				targetpc and off together are the key of this cache entry.
			
				// The value of this entry.
			
				// The PC at which val starts
	
		perThreadSyscallArgs contains the system call number, arguments, and
		expected return values for a system call to be executed on all threads.
		
			
			a1 uintptr
			a2 uintptr
			a3 uintptr
			a4 uintptr
			a5 uintptr
			a6 uintptr
			r1 uintptr
			r2 uintptr
			trap uintptr
		
			
			  var perThreadSyscall
	
		
			
			
				// Integral of the error from t=0 to now.
			
				Error flags.
				// Set if errIntegral ever overflowed.
			
				// Set if an operation with the input overflowed.
			
				// Proportional constant.
			
				// Output boundaries.
			
				// Output boundaries.
			
				// Integral time constant.
			
				// Reset time.
		
			
			
				next provides a new sample to the controller.
				
				input is the sample, setpoint is the desired point, and period is how much
				time (in whatever unit makes the most sense) has passed since the last sample.
				
				Returns a new value for the variable it's controlling, and whether the operation
				completed successfully. One reason this might fail is if error has been growing
				in an unbounded manner, to the point of overflow.
				
				In the specific case where an error overflow occurs, the errOverflow field will be
				set and the rest of the controller's internal state will be fully reset.
			
				reset resets the controller state, except for controller error flags.
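				
				A minimal sketch of a proportional-integral update of the kind next describes;
				the gains, bounds, and overflow handling here are illustrative assumptions,
				not the runtime's:
				
					package sketch
					
					// pi is a toy proportional-integral controller.
					type pi struct {
						kp, ti      float64 // proportional gain, integral time constant
						min, max    float64 // output bounds
						errIntegral float64 // integral of the error so far
					}
					
					// next folds in one sample and returns the new output,
					// clamped to [min, max].
					func (c *pi) next(input, setpoint, period float64) float64 {
						err := setpoint - input
						c.errIntegral += c.kp * period / c.ti * err
						out := c.kp*err + c.errIntegral
						if out < c.min {
							out = c.min
						} else if out > c.max {
							out = c.max
						}
						return out
					}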
	
		pinnerBits is the same type as gcBits but has different methods.
		
			
			x uint8
		
			
			
				ofObject returns the pinState of the n'th object.
				nosplit, because it's called by isPinned, which is nosplit
	
		
			
			byteVal uint8
			bytep *uint8
			mask uint8
		
			
			(*pinState) isMultiPinned() bool
			
				nosplit, because it's called by isPinned, which is nosplit
			
				set sets the pin bit of the pinState to val. If multipin is true, it
				sets/unsets the multipin bit instead.
			(*pinState) setMultiPinned(val bool)
			(*pinState) setPinned(val bool)
	
		plainError represents a runtime error described by a string without
		the prefix "runtime error: " after invoking errorString.Error().
		See Issue #14965.
		
			( plainError) Error() string
			( plainError) RuntimeError()
		
			 plainError : Error
			 plainError : error
	
		pMask is an atomic bitstring with one bit per P.
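		
		A sketch of the word-and-bit indexing such a bitstring implies, with sync/atomic
		standing in for the runtime's internal atomics (names below are illustrative):
		
			package sketch
			
			import "sync/atomic"
			
			// pMaskSketch holds one bit per P in 32-bit words.
			type pMaskSketch []uint32
			
			// set sets P id's bit.
			func (p pMaskSketch) set(id int32) {
				word, mask := id/32, uint32(1)<<(id%32)
				for {
					old := atomic.LoadUint32(&p[word])
					if old&mask != 0 ||
						atomic.CompareAndSwapUint32(&p[word], old, old|mask) {
						return
					}
				}
			}
			
			// read reports whether P id's bit is set.
			func (p pMaskSketch) read(id int32) bool {
				word, mask := id/32, uint32(1)<<(id%32)
				return atomic.LoadUint32(&p[word])&mask != 0
			}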
		
			
			
				clear clears P id's bit.
			
				read returns true if P id's bit is set.
			
				set sets P id's bit.
		
			
			func checkRunqsNoP(allpSnapshot []*p, idlepMaskSnapshot pMask) *p
			func checkTimersNoP(allpSnapshot []*p, timerpMaskSnapshot pMask, pollUntil int64) int64
		
			
			  var idlepMask
			  var timerpMask
	
		
			
			first *pollDesc
			lock mutex
		
			
			(*pollCache) alloc() *pollDesc
			(*pollCache) free(pd *pollDesc)
		
			
			  var pollcache
	
		Network poller descriptor.
		
		No heap pointers.
		
			
			
				atomicInfo holds bits from closing, rd, and wd,
				which are only ever written while holding the lock,
				summarized for use by netpollcheckerr,
				which cannot acquire the lock.
				After writing these fields under lock in a way that
				might change the summary, code must call publishInfo
				before releasing the lock.
				Code that changes fields and then calls netpollunblock
				(while still holding the lock) must call publishInfo
				before calling netpollunblock, because publishInfo is what
				stops netpollblock from blocking anew
				(by changing the result of netpollcheckerr).
				atomicInfo also holds the eventErr bit,
				recording whether a poll event on the fd got an error;
				atomicInfo is the only source of truth for that bit.
				// atomic pollInfo
			closing bool
			
				// constant for pollDesc usage lifetime
			
				// protects against stale pollDesc
			
				// in pollcache, protected by pollcache.lock
			
				// protects the following fields
			
				// read deadline (a nanotime in the future, -1 when expired)
			
				rg, wg are accessed atomically and hold g pointers.
				(Using atomic.Uintptr here is similar to using guintptr elsewhere.)
				// pdReady, pdWait, G waiting for read or pdNil
			
				// whether rt is running
			
				// protects from stale read timers
			
				// read deadline timer
			
				// storage for indirect interface. See (*pollDesc).makeArg.
			
				// user settable cookie
			
				// write deadline (a nanotime in the future, -1 when expired)
			
				// pdReady, pdWait, G waiting for write or pdNil
			
				// whether wt is running
			
				// protects from stale write timers
			
				// write deadline timer
		
			
			
				info returns the pollInfo corresponding to pd.
			
				makeArg converts pd to an interface{}.
				makeArg does not do any allocation. Normally, such
				a conversion requires an allocation because pointers to
				types which embed internal/runtime/sys.NotInHeap (which pollDesc is)
				must be stored in interfaces indirectly. See issue 42076.
			
				publishInfo updates pd.atomicInfo (returned by pd.info)
				using the other values in pd.
				It must be called while holding pd.lock,
				and it must be called after changing anything
				that might affect the info bits.
				In practice this means after changing closing
				or changing rd or wd from < 0 to >= 0.
			
				setEventErr sets the result of pd.info().eventErr() to b.
				We only change the error bit if seq == 0 or if seq matches pollFDSeq
				(issue #59545).
		
			
			func poll_runtime_pollOpen(fd uintptr) (*pollDesc, int)
		
			
			func netpollarm(pd *pollDesc, mode int)
			func netpollblock(pd *pollDesc, mode int32, waitio bool) bool
			func netpollcheckerr(pd *pollDesc, mode int32) int
			func netpolldeadlineimpl(pd *pollDesc, seq uintptr, read, write bool)
			func netpollopen(fd uintptr, pd *pollDesc) uintptr
			func netpollready(toRun *gList, pd *pollDesc, mode int32) int32
			func netpollunblock(pd *pollDesc, mode int32, ioready bool, delta *int32) *g
			func poll_runtime_pollClose(pd *pollDesc)
			func poll_runtime_pollReset(pd *pollDesc, mode int) int
			func poll_runtime_pollSetDeadline(pd *pollDesc, d int64, mode int)
			func poll_runtime_pollUnblock(pd *pollDesc)
			func poll_runtime_pollWait(pd *pollDesc, mode int) int
			func poll_runtime_pollWaitCanceled(pd *pollDesc, mode int)
	
		pollInfo is the bits needed by netpollcheckerr, stored atomically,
		mostly duplicating state that is manipulated under lock in pollDesc.
		The one exception is the pollEventErr bit, which is maintained only
		in the pollInfo.
		
			
			( pollInfo) closing() bool
			( pollInfo) eventErr() bool
			( pollInfo) expiredReadDeadline() bool
			( pollInfo) expiredWriteDeadline() bool
	
		A profAtomic is the atomically-accessed word holding a profIndex.
		
			
			(*profAtomic) cas(old, new profIndex) bool
			(*profAtomic) load() profIndex
			(*profAtomic) store(new profIndex)
	
		A profBuf is a lock-free buffer for profiling events,
		safe for concurrent use by one reader and one writer.
		The writer may be a signal handler running without a user g.
		The reader is assumed to be a user g.
		
		Each logged event corresponds to a fixed size header, a list of
		uintptrs (typically a stack), and exactly one unsafe.Pointer tag.
		The header and uintptrs are stored in the circular buffer data and the
		tag is stored in a circular buffer tags, running in parallel.
		In the circular buffer data, each event takes 2+hdrsize+len(stk)
		words: the value 2+hdrsize+len(stk), then the time of the event, then
		hdrsize words giving the fixed-size header, and then len(stk) words
		for the stack.
		
		The current effective offsets into the tags and data circular buffers
		for reading and writing are stored in the high 30 and low 32 bits of r and w.
		The bottom bits of the high 32 are additional flag bits in w, unused in r.
		"Effective" offsets means the total number of reads or writes, mod 2^length.
		The offset in the buffer is the effective offset mod the length of the buffer.
		To make wraparound mod 2^length match wraparound mod length of the buffer,
		the length of the buffer must be a power of two.
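		
		For example, with a power-of-two buffer, reducing an effective offset to a buffer
		index is a simple mask (a hypothetical helper, not part of profBuf):
		
			// bufIndex maps an effective offset to an index into buf,
			// assuming len(buf) is a power of two.
			func bufIndex(off uint32, buf []uint64) uint32 {
				return off & uint32(len(buf)-1) // same as off % uint32(len(buf))
			}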
		
		If the reader catches up to the writer, a flag passed to read controls
		whether the read blocks until more data is available. A read returns a
		pointer to the buffer data itself; the caller is assumed to be done with
		that data at the next read. The read offset rNext tracks the next offset to
		be returned by read. By definition, r ≤ rNext ≤ w (before wraparound),
		and rNext is only used by the reader, so it can be accessed without atomics.
		
		If the writer gets ahead of the reader, so that the buffer fills,
		future writes are discarded and replaced in the output stream by an
		overflow entry, which has size 2+hdrsize+1, time set to the time of
		the first discarded write, a header of all zeroed words, and a "stack"
		containing one word, the number of discarded writes.
		
		Between the time the buffer fills and the buffer becomes empty enough
		to hold more data, the overflow entry is stored as a pending overflow
		entry in the fields overflow and overflowTime. The pending overflow
		entry can be turned into a real record by either the writer or the
		reader. If the writer is called to write a new record and finds that
		the output buffer has room for both the pending overflow entry and the
		new record, the writer emits the pending overflow entry and the new
		record into the buffer. If the reader is called to read data and finds
		that the output buffer is empty but that there is a pending overflow
		entry, the reader will return a synthesized record for the pending
		overflow entry.
		
		Only the writer can create or add to a pending overflow entry, but
		either the reader or the writer can clear the pending overflow entry.
		A pending overflow entry is indicated by the low 32 bits of 'overflow'
		holding the number of discarded writes, and overflowTime holding the
		time of the first discarded write. The high 32 bits of 'overflow'
		increment each time the low 32 bits transition from zero to non-zero
		or vice versa. This sequence number avoids ABA problems in the use of
		compare-and-swap to coordinate between reader and writer.
		The overflowTime is only written when the low 32 bits of overflow are
		zero, that is, only when there is no pending overflow entry, in
		preparation for creating a new one. The reader can therefore fetch and
		clear the entry atomically using
		
			for {
				overflow = load(&b.overflow)
				if uint32(overflow) == 0 {
					// no pending entry
					break
				}
				time = load(&b.overflowTime)
				if cas(&b.overflow, overflow, ((overflow>>32)+1)<<32) {
					// pending entry cleared
					break
				}
			}
			if uint32(overflow) > 0 {
				emit entry for uint32(overflow), time
			}
		
			
			data []uint64
			eof atomic.Uint32
			
				immutable (excluding slice content)
			overflow atomic.Uint64
			
				// for use by reader to return overflow record
			overflowTime atomic.Uint64
			
				accessed atomically
			
				owned by reader
			tags []unsafe.Pointer
			
				accessed atomically
			wait note
		
			
			
				canWriteRecord reports whether the buffer has room
				for a single contiguous record with a stack of length nstk.
			
				canWriteTwoRecords reports whether the buffer has room
				for two records with stack lengths nstk1, nstk2, in that order.
				Each record must be contiguous on its own, but the two
				records need not be contiguous (one can be at the end of the buffer
				and the other can wrap around and start at the beginning of the buffer).
			
				close signals that there will be no more writes on the buffer.
				Once all the data has been read from the buffer, reads will return eof=true.
			
				hasOverflow reports whether b has any overflow records pending.
			
				incrementOverflow records a single overflow at time now.
				It is racing against a possible takeOverflow in the reader.
			(*profBuf) read(mode profBufReadMode) (data []uint64, tags []unsafe.Pointer, eof bool)
			
				takeOverflow consumes the pending overflow records, returning the overflow count
				and the time of the first overflow.
				When called by the reader, it is racing against incrementOverflow.
			
				wakeupExtra must be called after setting one of the "extra"
				atomic fields b.overflow or b.eof.
				It records the change in b.w and wakes up the reader if needed.
			
				write writes an entry to the profiling buffer b.
				The entry begins with a fixed hdr, which must have
				length b.hdrsize, followed by a variable-sized stack
				and a single tag pointer *tagPtr (or nil if tagPtr is nil).
				No write barriers allowed because this might be called from a signal handler.
		
			
			func newProfBuf(hdrsize, bufwords, tags int) *profBuf
	
		profBufReadMode specifies whether to block when no data is available to read.
		
			
			const profBufBlocking
			const profBufNonBlocking
	
		A profIndex is the packed tag and data counts and flag bits, described above.
		
			
			
				addCountsAndClearFlags returns the packed form of "x + (data, tag) - all flags".
			( profIndex) dataCount() uint32
			( profIndex) tagCount() uint32
		
			
			const profReaderSleeping
			const profWriteExtra
	
		A ptabEntry is generated by the compiler for each exported function
		and global variable in the main package of a plugin. It is used to
		initialize the plugin module's symbol map.
		
			
			name nameOff
			typ typeOff
	
		pTraceState is per-P state for the tracer.
		
			
			
				inSweep indicates that at least one sweep event has been traced.
			
				mSyscallID is the ID of the M this was bound to before entering a syscall.
			
				maySweep indicates that sweep events should be traced.
				This is used to defer the sweep start event until a span
				has actually been swept.
			
				swept and reclaimed track the number of bytes swept and reclaimed
				by sweeping in the current sweep loop (while maySweep was true).
			
				swept and reclaimed track the number of bytes swept and reclaimed
				by sweeping in the current sweep loop (while maySweep was true).
			traceSchedResourceState traceSchedResourceState
			
				seq is the sequence counter for this scheduling resource's events.
				The purpose of the sequence counter is to establish a partial order between
				events that don't obviously happen serially (same M) in the stream of events.
				
				There are two of these so that we can reset the counter on each generation.
				This saves space in the resulting trace by keeping the counter small and allows
				GoStatus and GoCreate events to omit a sequence number (implicitly 0).
			
				statusTraced indicates whether a status event was traced for this resource
				in a particular generation.
				
				There are 3 of these because when transitioning across generations, traceAdvance
				needs to be able to reliably observe whether a status was traced for the previous
				generation, while we need to clear the value for the next generation.
		
			
			
				acquireStatus acquires the right to emit a Status event for the scheduling resource.
				
				nosplit because it's part of writing an event for an M, which must not
				have any stack growth.
			
				nextSeq returns the next sequence number for the resource.
			
				readyNextGen readies r for the generation following gen.
			
				setStatusTraced indicates that the resource's status was already traced, for example
				when a goroutine is created.
			
				statusWasTraced returns true if the sched resource's status was already acquired for tracing.
	
		
			
			count uint32
			i uint32
			inc uint32
			pos uint32
		
			
			(*randomEnum) done() bool
			(*randomEnum) next()
			(*randomEnum) position() uint32
	
		randomOrder/randomEnum are helper types for randomized work stealing.
		They allow enumerating all Ps in different pseudo-random orders without repetitions.
		The algorithm is based on the fact that if we have X such that X and GOMAXPROCS
		are coprime, then the sequence (i + X) % GOMAXPROCS gives the required enumeration.
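		
		A minimal sketch of that enumeration (spelled out here for illustration; the runtime
		precomputes suitable coprime increments in randomOrder.reset):
		
			package sketch
			
			// enumerate visits every value in [0, n) exactly once, in the order
			// start, start+inc, start+2*inc, ... (mod n). The walk covers all of
			// [0, n) precisely because gcd(inc, n) == 1.
			func enumerate(n, start, inc uint32, visit func(uint32)) {
				pos := start % n
				for i := uint32(0); i < n; i++ {
					visit(pos)
					pos = (pos + inc) % n
				}
			}
			
			func gcd(a, b uint32) uint32 {
				for b != 0 {
					a, b = b, a%b
				}
				return a
			}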
		
			
			coprimes []uint32
			count uint32
		
			
			(*randomOrder) reset(count uint32)
			(*randomOrder) start(i uint32) randomEnum
		
			
			  var stealOrder
	
		reflectMethodValue is a partial duplicate of reflect.makeFuncImpl
		and reflect.methodValue.
		
			
			
				// just args
			fn uintptr
			
				// ptrmap for both args and results
	
		rtype is a wrapper that allows us to define additional methods.
		
			
				// embedding is okay here (unlike reflect) because none of this is public
			
				// alignment of variable with this type
			
				function for comparing objects of this type
				(ptr to object A, ptr to object B) -> ==?
			
				// alignment of struct field with this type
			
				GCData stores the GC type data for the garbage collector.
				Normally, GCData points to a bitmask that describes the
				ptr/nonptr fields of the type. The bitmask will have at
				least PtrBytes/ptrSize bits.
				If the TFlagGCMaskOnDemand bit is set, GCData is instead a
				**byte and the pointer to the bitmask is one dereference away.
				The runtime will build the bitmask if needed.
				(See runtime/type.go:getGCMask.)
				Note: multiple types may have the same value of GCData,
				including when TFlagGCMaskOnDemand is set. The types will, of course,
				have the same pointer layout (but not necessarily the same size).
			
				// hash of type; avoids computation in hash tables
			
				// enumeration for C
			
				// number of (prefix) bytes in the type that can contain pointers
			
				// type for pointer to this type, may be zero
			Type.Size_ uintptr
			
				// string form
			
				// extra type information flags
		
			
				Align returns the alignment of data with type t.
			
				ArrayType returns t cast to a *ArrayType, or nil if its tag does not match.
			
				ChanDir returns the direction of t if t is a channel type, otherwise InvalidDir (0).
			( rtype) Common() *abi.Type
			
				Elem returns the element type for t if t is an array, channel, map, pointer, or slice, otherwise nil.
			( rtype) ExportedMethods() []abi.Method
			( rtype) FieldAlign() int
			
				FuncType returns t cast to a *FuncType, or nil if its tag does not match.
			( rtype) GcSlice(begin, end uintptr) []byte
			( rtype) HasName() bool
			
				IfaceIndir reports whether t is stored indirectly in an interface value.
			
				InterfaceType returns t cast to a *InterfaceType, or nil if its tag does not match.
			
				isDirectIface reports whether t is stored directly in an interface value.
			( rtype) Key() *abi.Type
			( rtype) Kind() abi.Kind
			
				Len returns the length of t if t is an array type, otherwise 0.
			
				MapType returns t cast to a *OldMapType or *SwissMapType, or nil if its tag does not match.
			( rtype) NumMethod() int
			
				Pointers reports whether t contains pointers.
			
				Size returns the size of data with type t.
			
				StructType returns t cast to a *StructType, or nil if its tag does not match.
			
				Uncommon returns a pointer to T's "uncommon" data if there is any, otherwise nil.
			
			( rtype) name() string
			( rtype) nameOff(off nameOff) name
			
				pkgpath returns the path of the package where t was defined, if
				available. This is not the same as the reflect package's PkgPath
				method, in that it returns the package path for struct and interface
				types, not just named types.
			( rtype) string() string
			( rtype) textOff(off textOff) unsafe.Pointer
			( rtype) typeOff(off typeOff) *_type
			( rtype) uncommon() *uncommontype
		
			
			func toRType(t *abi.Type) rtype
	
		A runtimeSelect is a single case passed to rselect.
		This must match ../reflect/value.go:/runtimeSelect
		
			
			
				// channel
			dir selectDir
			
				// channel type (not used here)
			
				// ptr to data (SendDir) or ptr to receive buffer (RecvDir)
		
			
			func reflect_rselect(cases []runtimeSelect) (int, bool)
	
		A rwmutex is a reader/writer mutual exclusion lock.
		The lock can be held by an arbitrary number of readers or a single writer.
		This is a variant of sync.RWMutex, for the runtime package.
		Like mutex, rwmutex blocks the calling M.
		It does not interact with the goroutine scheduler.
		
			
			
				// protects readers, readerPass, writer
			
				// semantic lock rank for read locking
			
				// number of pending readers
			
				// number of pending readers to skip readers list
			
				// number of departing readers
			
				// list of pending readers
			
				// serializes writers
			
				// pending writer waiting for completing readers
		
			
			
				Lock ranking an rwmutex has two aspects:
				
				Semantic ranking: this rwmutex represents some higher level lock that
				protects some resource (e.g., allocmLock protects creation of new Ms). The
				read and write locks of that resource need to be represented in the lock
				rank.
				
				Internal ranking: as an implementation detail, rwmutex uses two mutexes:
				rLock and wLock. These have lock order requirements: wLock must be locked
				before rLock. This also needs to be represented in the lock rank.
				
				Semantic ranking is represented by acquiring readRank during read lock and
				writeRank during write lock.
				
				wLock is held for the duration of a write lock, so it uses writeRank
				directly, both for semantic and internal ranking. rLock is only held
				temporarily inside the rlock/lock methods, so it uses readRankInternal to
				represent internal ranking. Semantic ranking is represented by a separate
				acquire of readRank for the duration of a read lock.
				
				The lock ranking must document this ordering:
				  - readRankInternal is a leaf lock.
				  - readRank is taken before readRankInternal.
				  - writeRank is taken before readRankInternal.
				  - readRank is placed in the lock order wherever a read lock of this rwmutex
				    belongs.
				  - writeRank is placed in the lock order wherever a write lock of this
				    rwmutex belongs.
			
				lock locks rw for writing.
			
				rlock locks rw for reading.
			
				runlock undoes a single rlock call on rw.
			
				unlock unlocks rw for writing.
		
			
			  var allocmLock
			  var execLock
	
		savedOpenDeferState tracks the extra state from _panic that's
		necessary for deferreturn to pick up where gopanic left off,
		without needing to unwind the stack.
		
			
			deferBitsOffset uintptr
			retpc uintptr
			slotsOffset uintptr
	
		Select case descriptor.
		Known to compiler.
		Changes here must also be made in src/cmd/compile/internal/walk/select.go's scasetype.
		
			
			
				// chan
			
				// data element
		
			
			func selectgo(cas0 *scase, order0 *uint16, pc0 *uintptr, nsends, nrecvs int, block bool) (int, bool)
			func sellock(scases []scase, lockorder []uint16)
			func selunlock(scases []scase, lockorder []uint16)
	
		scavChunkData tracks information about a palloc chunk for
		scavenging. It packs well into 64 bits.
		
		The zero value always represents a valid newly-grown chunk.
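		
		A sketch of how the fields below can pack into 64 bits, using the widths given in
		their comments (two 10-bit occupancy counts, 6 flag bits, and a 32-bit generation);
		the bit layout here is illustrative and may differ from the runtime's pack:
		
			package sketch
			
			// pack packs chunk metadata into one uint64: inUse and lastInUse in
			// 10 bits each, flags in 6 bits, and gen in the remaining 32 bits.
			func pack(inUse, lastInUse uint16, flags uint8, gen uint32) uint64 {
				return uint64(inUse&0x3ff) |
					uint64(lastInUse&0x3ff)<<10 |
					uint64(flags&0x3f)<<20 |
					uint64(gen)<<26
			}
			
			// unpack reverses pack.
			func unpack(s uint64) (inUse, lastInUse uint16, flags uint8, gen uint32) {
				return uint16(s & 0x3ff),
					uint16(s >> 10 & 0x3ff),
					uint8(s >> 20 & 0x3f),
					uint32(s >> 26)
			}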
		
			
			
				gen is the generation counter from a scavengeIndex from the
				last time this scavChunkData was updated.
			
				inUse indicates how many pages in this chunk are currently
				allocated.
				
				Only the first 10 bits are used.
			
				lastInUse indicates how many pages in this chunk were allocated
				when we transitioned from gen-1 to gen.
				
				Only the first 10 bits are used.
			
				scavChunkFlags represents additional flags
				
				Note: only 6 bits are available.
		
			
			
				alloc updates sc given that npages were allocated in the corresponding chunk.
			
				free updates sc given that npages were freed in the corresponding chunk.
			
				isEmpty returns true if the hasFree flag is unset.
			
				pack returns sc packed into a uint64.
			
				setEmpty clears the hasFree flag.
			
				setNonEmpty sets the hasFree flag.
			
				shouldScavenge returns true if the corresponding chunk should be interrogated
				by the scavenger.
		
			
			func unpackScavChunkData(sc uint64) scavChunkData
	
		scavChunkFlags is a set of bit-flags for the scavenger for each palloc chunk.
		
			
			
				isEmpty returns true if the hasFree flag is unset.
			
				setEmpty clears the hasFree flag.
			
				setNonEmpty sets the hasFree flag.
		
			
			const scavChunkHasFree
	
		scavengeIndex is a structure for efficiently managing which pageAlloc chunks have
		memory available to scavenge.
		
			
			
				chunks is a scavChunkData-per-chunk structure that indicates the presence of pages
				available for scavenging. Updates to the index are serialized by the pageAlloc lock.
				
				It tracks chunk occupancy and a generation counter per chunk. If a chunk's occupancy
				never exceeds pallocChunkDensePages over the course of a single GC cycle, the chunk
				becomes eligible for scavenging on the next cycle. If a chunk ever hits this density
				threshold it immediately becomes unavailable for scavenging in the current cycle as
				well as the next.
				
				[min, max) represents the range of chunks that is safe to access (i.e. will not cause
				a fault). As an optimization minHeapIdx represents the true minimum chunk that has been
				mapped, since min is likely rounded down to include the system page containing minHeapIdx.
				
				For a chunk size of 4 MiB this structure will only use 2 MiB for a 1 TiB contiguous heap.
			
				freeHWM is the highest address (in offset address space) that was freed
				this generation.
			
				Generation counter. Updated by nextGen at the end of each mark phase.
			max atomic.Uintptr
			min atomic.Uintptr
			minHeapIdx atomic.Uintptr
			
				searchAddr* is the maximum address (in the offset address space, so we have a linear
				view of the address space; see mranges.go:offAddr) containing memory available to
				scavenge. It is a hint to the find operation to avoid O(n^2) behavior in repeated lookups.
				
				searchAddr* is always inclusive and should be the base address of the highest runtime
				page available for scavenging.
				
				searchAddrForce is managed by find and free.
				searchAddrBg is managed by find and nextGen.
				
				Normally, find monotonically decreases searchAddr* as it finds no more free pages to
				scavenge. However, mark, when marking a new chunk at an index greater than the current
				searchAddr, sets searchAddr to the *negative* index into chunks of that page. The trick here
				is that concurrent calls to find will fail to monotonically decrease searchAddr*, and so they
				won't barge over new memory becoming available to scavenge. Furthermore, this ensures
				that some future caller of find *must* observe the new high index. That caller
				(or any other racing with it), then makes searchAddr positive before continuing, bringing
				us back to our monotonically decreasing steady-state.
				
				A pageAlloc lock serializes updates between min, max, and searchAddr, so abs(searchAddr)
				is always guaranteed to be >= min and < max (converted to heap addresses).
				
				searchAddrBg is increased only on each new generation and is mainly used by the
				background scavenger and heap-growth scavenging. searchAddrForce is increased continuously
				as memory gets freed and is mainly used by eager memory reclaim such as debug.FreeOSMemory
				and scavenging to maintain the memory limit.
			searchAddrForce atomicOffAddr
			
				test indicates whether or not we're in a test.
		
			
			
				alloc updates metadata for chunk at index ci with the fact that
				an allocation of npages occurred. It also eagerly attempts to collapse
				the chunk's memory into hugepage if the chunk has become sufficiently
				dense and we're not allocating the whole chunk at once (which suggests
				the allocation is part of a bigger one and it's probably not worth
				eagerly collapsing).
				
				alloc may only run concurrently with find.
			
				find returns the highest chunk index that may contain pages available to scavenge.
				It also returns an offset to start searching in the highest chunk.
			
				free updates metadata for chunk at index ci with the fact that
				a free of npages occurred.
				
				free may only run concurrently with find.
			
				grow updates the index's backing store in response to a heap growth.
				
				Returns the amount of memory added to sysStat.
			
				init initializes the scavengeIndex.
				
				Returns the amount added to sysStat.
			
				nextGen moves the scavenger forward one generation. Must be called
				once per GC cycle, but may be called more often to force more memory
				to be released.
				
				nextGen may only run concurrently with find.
			
				setEmpty marks that the scavenger has finished looking at ci
				for now to prevent the scavenger from getting stuck looking
				at the same chunk.
				
				setEmpty may only run concurrently with find.
			
				sysGrow increases the index's backing store in response to a heap growth.
				
				Returns the amount of memory added to sysStat.
			
				sysInit initializes the scavengeIndex's chunks array.
				
				Returns the amount of memory added to sysStat.
	
		
			
			
				controllerCooldown is the time left in nanoseconds during which we avoid
				using the controller and we hold sleepRatio at a conservative
				value. Used if the controller's assumptions fail to hold.
			
				g is the goroutine the scavenger is bound to.
			
				gomaxprocs returns the current value of gomaxprocs. Stub for testing.
				
				If this is nil, it is populated with the real thing in init.
			
				lock protects all fields below.
			
				parked is whether or not the scavenger is parked.
			
				printControllerReset instructs printScavTrace to signal that
				the controller was reset.
			
				scavenge is a function that scavenges n bytes of memory.
				Returns how many bytes of memory it actually scavenged, as
				well as the time it took in nanoseconds. Usually mheap.pages.scavenge
				with nanotime called around it, but stubbed out for testing.
				Like mheap.pages.scavenge, if it scavenges less than n bytes of
				memory, the caller may assume the heap is exhausted of scavengable
				memory for now.
				
				If this is nil, it is populated with the real thing in init.
			
				shouldStop is a callback called in the work loop and provides a
				point that can force the scavenger to stop early, for example because
				the scavenge policy dictates too much has been scavenged already.
				
				If this is nil, it is populated with the real thing in init.
			
				sleepController controls sleepRatio.
				
				See sleepRatio for more details.
			
				sleepRatio is the ratio of time spent doing scavenging work to
				time spent sleeping. This is used to decide how long the scavenger
				should sleep for in between batches of work. It is set by
				critSleepController in order to maintain a CPU overhead of
				targetCPUFraction.
				
				Lower means more sleep, higher means more aggressive scavenging.
			
				sleepStub is a stub used for testing to avoid actually having
				the scavenger sleep.
				
				Unlike the other stubs, this is not populated if left nil.
				Instead, it is called when non-nil because any valid implementation
				of this function basically requires closing over this scavenger
				state, and allocating a closure is not allowed in the runtime as
				a matter of policy.
			
				sysmonWake signals to sysmon that it should wake the scavenger.
			
				targetCPUFraction is the target CPU overhead for the scavenger.
			
				timer is the timer used for the scavenger to sleep.
		
			
			
				controllerFailed indicates that the scavenger's scheduling
				controller failed.
			
				init initializes a scavenger state and wires to the current G.
				
				Must be called from a regular goroutine that can allocate.
			
				park parks the scavenger goroutine.
			
				ready signals to sysmon that the scavenger should be awoken.
			
				run is the body of the main scavenging loop.
				
				Returns the number of bytes released and the estimated time spent
				releasing those bytes.
				
				Must be run on the scavenger goroutine.
			
				sleep puts the scavenger to sleep based on the amount of time that it worked
				in nanoseconds.
				
				Note that this function should only be called by the scavenger.
				
				The scavenger may be woken up earlier by a pacing change, and it may not go
				to sleep at all if there's a pending pacing change.
			
				wake immediately unparks the scavenger if necessary.
				
				Safe to run without a P.
		
			
			  var scavenger
	
		
			
			
				Central pool of available defer structs.
			deferpool *_defer
			
				disable controls selective disabling of the scheduler.
				
				Use schedEnableUser to control this.
				
				disable is protected by sched.lock.
			
				freem is the list of m's waiting to be freed when their
				m.exited is set. Linked through m.freelink.
			
				Global cache of dead G's.
			
				// gc is waiting to run
			goidgen atomic.Uint64
			
				idleTime is the total CPU time Ps have "spent" idle.
				
				Reset on each GC cycle.
			
				// time of last network poll, 0 if currently polling
			lock mutex
			
				// maximum number of m's allowed (or die)
			
				// idle m's waiting for work
			
				// number of m's that have been created and next M ID
			
				// See "Delicate dance" comment in proc.go. Boolean. Must hold sched.lock to set to 1.
			
				// number of system goroutines
			
				// cumulative number of freed m's
			
				// number of idle m's waiting for work
			
				// number of locked m's waiting for work
			
				// See "Worker thread parking/unparking" comment in proc.go.
			
				// number of system m's not counted for deadlock
			npidle atomic.Int32
			
				// idle p's
			
				// time to which current poll is sleeping
			
				// nanotime() of last change to gomaxprocs
			
				// cpu profiling rate
			
				Global runnable queue.
			runqsize int32
			
				safePointFn should be called on each P at the next GC
				safepoint if p.runSafePointFn is set.
			safePointNote note
			safePointWait int32
			stopnote note
			stopwait int32
			
				stwStoppingTimeGC/Other are distributions of stop-the-world stopping
				latencies, defined as the time taken by stopTheWorldWithSema to get
				all Ps to stop. stwStoppingTimeGC covers all GC-related STWs,
				stwStoppingTimeOther covers the others.
			stwStoppingTimeOther timeHistogram
			
				stwTotalTimeGC/Other are distributions of stop-the-world total
				latencies, defined as the total time from stopTheWorldWithSema to
				startTheWorldWithSema. This is a superset of
				stwStoppingTimeGC/Other. stwTotalTimeGC covers all GC-related STWs,
				stwTotalTimeOther covers the others.
			stwTotalTimeOther timeHistogram
			sudogcache *sudog
			
				Central cache of sudog structs.
			
				sysmonlock protects sysmon's actions on the runtime.
				
				Acquire and hold this mutex to block sysmon from interacting
				with the rest of the runtime.
			sysmonnote note
			sysmonwait atomic.Bool
			
				timeToRun is a distribution of scheduling latencies, defined
				as the sum of time a G spends in the _Grunnable state before
				it transitions to _Grunning.
			
				totalMutexWaitTime is the sum of time goroutines have spent in _Gwaiting
				with a waitreason of the form waitReasonSync{RW,}Mutex{R,}Lock.
			
				totalRuntimeLockWaitTime (plus the value of lockWaitTime on each M in
				allm) is the sum of time goroutines have spent in _Grunnable and with an
				M, but waiting for locks within the runtime. This field stores the value
				for Ms that have exited.
			
				// ∫gomaxprocs dt up to procresizetime
		
			
			  var sched
	
		These values must match ../reflect/value.go:/SelectDir.
		
			
			const selectDefault
			const selectRecv
			const selectSend
	
		
			
			func semacquire1(addr *uint32, lifo bool, profile semaProfileFlags, skipframes int, reason waitReason)
		
			
			const semaBlockProfile
			const semaMutexProfile
	
		A semaRoot holds a balanced tree of sudog with distinct addresses (s.elem).
		Each of those sudog may in turn point (through s.waitlink) to a list
		of other sudogs waiting on the same address.
		The operations on the inner lists of sudogs with the same address
		are all O(1). The scanning of the top-level semaRoot list is O(log n),
		where n is the number of distinct addresses with goroutines blocked
		on them that hash to the given semaRoot.
		See golang.org/issue/17953 for a program that worked badly
		before we introduced the second level of list, and
		BenchmarkSemTable/OneAddrCollision/* for a benchmark that exercises this.
		
			
			lock mutex
			
				// Number of waiters. Read w/o the lock.
			
				// root of balanced tree of unique waiters.
		
			
			
				dequeue searches for and finds the first goroutine
				in semaRoot blocked on addr.
				If the sudog was being profiled, dequeue returns the time
				at which it was woken up as now. Otherwise now is 0.
				If there are additional entries in the wait list, dequeue
				returns tailtime set to the last entry's acquiretime.
				Otherwise tailtime is found.acquiretime.
			
				queue adds s to the blocked goroutines in semaRoot.
			
				rotateLeft rotates the tree rooted at node x,
				turning (x a (y b c)) into (y (x a b) c).
			
				rotateRight rotates the tree rooted at node y,
				turning (y (x a b) c) into (x a (y b c)).
	
		
			
			sa_flags uint64
			sa_handler uintptr
			sa_mask uint64
			sa_restorer uintptr
		
			
			func callCgoSigaction(sig uintptr, new, old *sigactiont) int32
			func rt_sigaction(sig uintptr, new, old *sigactiont, size uintptr) int32
			func sigaction(sig uint32, new, old *sigactiont)
			func sysSigaction(sig uint32, new, old *sigactiont)
	
		
			
			__pad0 uint16
			__reserved1 [8]uint64
			cr2 uint64
			cs uint16
			eflags uint64
			err uint64
			fpstate *fpstate1
			fs uint16
			gs uint16
			oldmask uint64
			r10 uint64
			r11 uint64
			r12 uint64
			r13 uint64
			r14 uint64
			r15 uint64
			r8 uint64
			r9 uint64
			rax uint64
			rbp uint64
			rbx uint64
			rcx uint64
			rdi uint64
			rdx uint64
			rip uint64
			rsi uint64
			rsp uint64
			trapno uint64
	
		
			
			ctxt unsafe.Pointer
			info *siginfo
		
			
			(*sigctxt) cs() uint64
			(*sigctxt) fault() uintptr
			(*sigctxt) fixsigcode(sig uint32)
			(*sigctxt) fs() uint64
			(*sigctxt) gs() uint64
			
				preparePanic sets up the stack to look like a call to sigpanic.
			(*sigctxt) pushCall(targetPC, resumePC uintptr)
			(*sigctxt) r10() uint64
			(*sigctxt) r11() uint64
			(*sigctxt) r12() uint64
			(*sigctxt) r13() uint64
			(*sigctxt) r14() uint64
			(*sigctxt) r15() uint64
			(*sigctxt) r8() uint64
			(*sigctxt) r9() uint64
			(*sigctxt) rax() uint64
			(*sigctxt) rbp() uint64
			(*sigctxt) rbx() uint64
			(*sigctxt) rcx() uint64
			(*sigctxt) rdi() uint64
			(*sigctxt) rdx() uint64
			(*sigctxt) regs() *sigcontext
			(*sigctxt) rflags() uint64
			(*sigctxt) rip() uint64
			(*sigctxt) rsi() uint64
			(*sigctxt) rsp() uint64
			(*sigctxt) set_rip(x uint64)
			(*sigctxt) set_rsp(x uint64)
			(*sigctxt) set_sigaddr(x uint64)
			(*sigctxt) set_sigcode(x uint64)
			(*sigctxt) setsigpc(x uint64)
			
				sigFromSeccomp reports whether the signal was sent from seccomp.
			
				sigFromUser reports whether the signal was sent because of a call
				to kill or tgkill.
			(*sigctxt) sigaddr() uint64
			(*sigctxt) sigcode() uint64
			(*sigctxt) siglr() uintptr
			(*sigctxt) sigpc() uintptr
			(*sigctxt) sigsp() uintptr
		
			
			func badsignal(sig uintptr, c *sigctxt)
			func doSigPreempt(gp *g, ctxt *sigctxt)
			func dumpregs(c *sigctxt)
			func fatalsignal(sig uint32, c *sigctxt, gp *g, mp *m) *g
			func raisebadsignal(sig uint32, c *sigctxt)
			func sigFetchG(c *sigctxt) *g
			func validSIGPROF(mp *m, c *sigctxt) bool
	
		
			
			sigeventFields sigeventFields
			sigeventFields.notify int32
			
				below here is a union; sigev_notify_thread_id is the only field we use
			sigeventFields.signo int32
			sigeventFields.value uintptr
		
			
			func timer_create(clockid int32, sevp *sigevent, timerid *int32) int32
	
		
			
			notify int32
			
				below here is a union; sigev_notify_thread_id is the only field we use
			signo int32
			value uintptr
	
		
			
			siginfoFields siginfoFields
			
				below here is a union; si_addr is the only field we use
			siginfoFields.si_code int32
			siginfoFields.si_errno int32
			siginfoFields.si_signo int32
		
			
			func sigfwd(fn uintptr, sig uint32, info *siginfo, ctx unsafe.Pointer)
			func sigfwdgo(sig uint32, info *siginfo, ctx unsafe.Pointer) bool
			func sighandler(sig uint32, info *siginfo, ctxt unsafe.Pointer, gp *g)
			func sigprofNonGo(sig uint32, info *siginfo, ctx unsafe.Pointer)
			func sigtrampgo(sig uint32, info *siginfo, ctx unsafe.Pointer)
	
		It's hard to tease out exactly how big a Sigset is, but
		rt_sigprocmask crashes if we get it wrong, so if binaries
		are running, this is right.
		
			
			func msigrestore(sigmask sigset)
			func rtsigprocmask(how int32, new, old *sigset, size int32)
			func sigaddset(mask *sigset, i int)
			func sigdelset(mask *sigset, i int)
			func sigprocmask(how int32, new, old *sigset)
			func sigsave(p *sigset)
		
			
			  var initSigmask
			  var sigset_all
			  var sigsetAllExiting
	
		sigTabT is the type of an entry in the global sigtable array.
		sigtable is inherently system dependent, and appears in OS-specific files,
		but sigTabT is the same for all Unixy systems.
		The sigtable array is indexed by a system signal number to get the flags
		and printable name of each signal.
		
			
			flags int32
			name string
	
		
			
			array unsafe.Pointer
			cap int
			len int
		
			
			func growslice(oldPtr unsafe.Pointer, newLen, oldCap, num int, et *_type) slice
			func reflect_growslice(et *_type, old slice, num int) slice
		
			
			func reflect_growslice(et *_type, old slice, num int) slice
			func reflect_typedslicecopy(elemType *_type, dst, src slice) int
	
		The specialized convTx routines need a type descriptor to use when calling mallocgc.
		We don't need the type to be exact, just to have the correct size, alignment, and pointer-ness.
		However, when debugging, it'd be nice to have some indication in mallocgc where the types came from,
		so we use named types here.
		We then construct interface values of these types,
		and then extract the type word to use as needed.
	
		spanAllocType represents the type of allocation to make, or
		the type of allocation to be freed.
		
			
			
				manual returns true if the span allocation is manually managed.
		
			
			const spanAllocHeap
			const spanAllocPtrScalarBits
			const spanAllocStack
			const spanAllocWorkBuf
	
		A spanClass represents the size class and noscan-ness of a span.
		
		Each size class has a noscan spanClass and a scan spanClass. The
		noscan spanClass contains only noscan objects, which do not contain
		pointers and thus do not need to be scanned by the garbage
		collector.
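		
		A sketch consistent with the accessors listed below: the low bit carries the noscan
		flag and the remaining bits carry the size class (the runtime's encoding may differ
		in detail):
		
			package sketch
			
			// spanClass packs a size class and a noscan bit into a single byte.
			type spanClass uint8
			
			func makeSpanClass(sizeclass uint8, noscan bool) spanClass {
				sc := spanClass(sizeclass) << 1
				if noscan {
					sc |= 1
				}
				return sc
			}
			
			func (sc spanClass) sizeclass() int8 { return int8(sc >> 1) }
			func (sc spanClass) noscan() bool    { return sc&1 != 0 }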
		
			
			( spanClass) noscan() bool
			( spanClass) sizeclass() int8
		
			
			func makeSpanClass(sizeclass uint8, noscan bool) spanClass
		
			
			const tinySpanClass
	
		A spanSet is a set of *mspans.
		
		spanSet is safe for concurrent push and pop operations.
		
			
			
				index is the head and tail of the spanSet in a single field.
				The head and the tail both represent an index into the logical
				concatenation of all blocks, with the head always behind or
				equal to the tail (indicating an empty set). This field is
				always accessed atomically.
				
				The head and the tail are only 32 bits wide, which means we
				can only support up to 2^32 pushes before a reset. If every
				span in the heap were stored in this set, and each span were
				the minimum size (1 runtime page, 8 KiB), then roughly the
				smallest heap which would be unrepresentable is 32 TiB in size.
			
				// *[N]atomic.Pointer[spanSetBlock]
			
				// Spine array cap, accessed under spineLock
			
				// Spine array length
			spineLock mutex
		
			
			
				pop removes and returns a span from buffer b, or nil if b is empty.
				pop is safe to call concurrently with other pop and push operations.
			
				push adds span s to buffer b. push is safe to call concurrently
				with other push and pop operations.
			
				reset resets a spanSet which is empty. It will also clean up
				any left over blocks.
				
				Throws if the buf is not empty.
				
				reset may not be called concurrently with any other operations
				on the span set.
	
		
			
			
				Free spanSetBlocks are managed via a lock-free stack.
			lfnode.next uint64
			lfnode.pushcnt uintptr
			
				popped is the number of pop operations that have occurred on
				this block. This number is used to help determine when a block
				may be safely recycled.
			
				spans is the set of spans in this block.
	
		spanSetBlockAlloc represents a concurrent pool of spanSetBlocks.
		
			
			stack lfstack
		
			
			
				alloc tries to grab a spanSetBlock out of the pool, and if it fails
				persistentallocs a new one and returns it.
			
				free returns a spanSetBlock back to the pool.
		
			
			  var spanSetBlockPool
	
		spanSetSpinePointer represents a pointer to a contiguous block of atomic.Pointer[spanSetBlock].
		
			
			p unsafe.Pointer
		
			
			
				lookup returns &s[idx].
	
		
			
			
				// kind of special
			
				// linked list in span
			
				// span offset of object
		
			
			func removespecial(p unsafe.Pointer, kind uint8) *special
		
			
			func addspecial(p unsafe.Pointer, s *special, force bool) bool
			func freeSpecial(s *special, p unsafe.Pointer, size uintptr)
	
		The described object has a cleanup set for it.
		
			
			fn *funcval
			
				Globally unique ID for the cleanup, obtained from mheap_.cleanupID.
			special special
	
		The described object has a finalizer set for it.
		
		specialfinalizer is allocated from non-GC'd memory, so any heap
		pointers must be specially handled.
		
			
			
				// May be a heap pointer, but always live.
			
				// May be a heap pointer.
			nret uintptr
			
				// May be a heap pointer, but always live.
			special special
	
		specialPinCounter tracks whether an object is pinned multiple times.
		
			
			counter uintptr
			special special
	
		specialReachable tracks whether an object is reachable on the next
		GC cycle. This is used by testing.
		
			
			done bool
			reachable bool
			special special
	
		specialsIter helps iterate over specials lists.
		
			
			pprev **special
			s *special
		
			
			(*specialsIter) next()
			
				unlinkAndNext removes the current special from the list and moves
				the iterator to the next special. It returns the unlinked special.
			(*specialsIter) valid() bool
		
			
			func newSpecialsIter(span *mspan) specialsIter
	
		The described object has a weak pointer.
		
		Weak pointers in the GC have the following invariants:
		
		  - Strong-to-weak conversions must ensure the strong pointer
		    remains live until the weak handle is installed. This ensures
		    that creating a weak pointer cannot fail.
		
		  - Weak-to-strong conversions require the weakly-referenced
		    object to be swept before the conversion may proceed. This
		    ensures that weak-to-strong conversions cannot resurrect
		    dead objects by sweeping them before that happens.
		
		  - Weak handles are unique and canonical for each byte offset into
		    an object that a strong pointer may point to, until an object
		    becomes unreachable.
		
		  - Weak handles contain nil as soon as an object becomes unreachable
		    the first time, before a finalizer makes it reachable again. New
		    weak handles created after resurrection are newly unique.
		
		specialWeakHandle is allocated from non-GC'd memory, so any heap
		pointers must be specially handled.
		
			
			
				handle is a reference to the actual weak pointer.
				It is always heap-allocated and must be explicitly kept
				live so long as this special exists.
			special special
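		For reference, these invariants are what the public weak package
		(available in recent Go releases) relies on. The sketch below only
		shows the observable behavior through weak.Make and Pointer.Value;
		it does not touch specialWeakHandle itself, and exactly when the
		handle starts reporting nil is up to the garbage collector.
		
			package main
			
			import (
				"fmt"
				"runtime"
				"weak"
			)
			
			func main() {
				v := new(int)
				*v = 42
			
				// Strong-to-weak: v is still strongly reachable here, so creating
				// the weak handle cannot fail.
				wp := weak.Make(v)
			
				// Weak-to-strong while the object is reachable.
				if p := wp.Value(); p != nil {
					fmt.Println(*p) // 42
				}
			
				// Drop the last strong reference; after a full collection the
				// handle is expected to report nil (it cannot resurrect the object).
				v = nil
				runtime.GC()
				fmt.Println(wp.Value() == nil)
			}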
	
		A srcFunc represents a logical function in the source code. This may
		correspond to an actual symbol in the binary text, or it may correspond to a
		source function that has been inlined.
		
			
			datap *moduledata
			funcID abi.FuncID
			nameOff int32
			startLine int32
		
			
			
				name should be an internal detail,
				but widely used packages access it using linkname.
				Notable members of the hall of shame include:
				  - github.com/phuslu/log
				
				Do not remove or change the type signature.
				See go.dev/issue/67401.
		
			
			func badSrcFunc(*inlineUnwinder, inlineFrame) srcFunc
		
			
			func badSrcFuncName(srcFunc) string
			func showframe(sf srcFunc, gp *g, firstFrame bool, calleeID abi.FuncID) bool
			func showfuncinfo(sf srcFunc, firstFrame bool, calleeID abi.FuncID) bool
	
		Stack describes a Go execution stack.
		The bounds of the stack are exactly [lo, hi),
		with no implicit data structures on either side.
		
			
			hi uintptr
			lo uintptr
		
			
			func stackalloc(n uint32) stack
		
			
			func fillstack(stk stack, b byte)
			func findsghi(gp *g, stk stack) uintptr
			func signalstack(s *stack)
			func stackfree(stk stack)
			func tracebackHexdump(stk stack, frame *stkframe, bad uintptr)
	
		
			
			
				// linked list of free stacks
			
				// total size of stacks in list
	
		
			
			
				// bitmaps, each starting on a byte boundary
			
				// number of bitmaps
			
				// number of bits in each bitmap
		
			
			func stackmapdata(stkmap *stackmap, n int32) bitvector
	
		A stackObject represents a variable on the stack that has had
		its address taken.
		
			
			
				// objects with lower addresses
			
				// offset above stack.lo
			
				// info of the object (for ptr/nonptr bits). nil if object has been scanned.
			
				// objects with higher addresses
			
				// size of object
		
			
			
				obj.r = r, but with no write barrier.
		
			
			func binarySearchTree(x *stackObjectBuf, idx int, n int) (root *stackObject, restBuf *stackObjectBuf, restIdx int)
	
		Buffer for stack objects found on a goroutine stack.
		Must be smaller than or equal to workbuf.
		
			
			obj [63]stackObject
			stackObjectBufHdr stackObjectBufHdr
			stackObjectBufHdr.next *stackObjectBuf
			stackObjectBufHdr.workbufhdr workbufhdr
			stackObjectBufHdr.workbufhdr.nobj int
			
				// must be first
		
			
			func binarySearchTree(x *stackObjectBuf, idx int, n int) (root *stackObject, restBuf *stackObjectBuf, restIdx int)
		
			
			func binarySearchTree(x *stackObjectBuf, idx int, n int) (root *stackObject, restBuf *stackObjectBuf, restIdx int)
	
		
			
			next *stackObjectBuf
			workbufhdr workbufhdr
			workbufhdr.nobj int
			
				// must be first
	
		A stackObjectRecord is generated by the compiler for each stack object in a stack frame.
		This record must match the generator code in cmd/compile/internal/liveness/plive.go:emitStackObjects.
		
			
			
				// offset to gcdata from moduledata.rodata
			
				offset in frame
				if negative, offset from varp
				if non-negative, offset from argp
			ptrBytes int32
			size int32
		
			
			
				gcdata returns the number of bytes that contain pointers, and
				a ptr/nonptr bitmask covering those bytes.
				Note that this bitmask might be larger than internal/abi.MaxPtrmaskBytes.
	
		A stackScanState keeps track of the state used during the GC walk
		of a goroutine.
		
			
			
				buf contains the set of possible pointers to stack objects.
				Organized as a LIFO linked list of buffers.
				All buffers except possibly the head buffer are full.
			
				cbuf contains conservative pointers to stack objects. If
				all pointers to a stack object are obtained via
				conservative scanning, then the stack object may be dead
				and may contain dead pointers, so it must be scanned
				defensively.
			
				conservative indicates that the next frame must be scanned conservatively.
				This applies only to the innermost frame at an async safe-point.
			
				// keep around one free buffer for allocation hysteresis
			
				list of stack objects
				Objects are in increasing address order.
			nobjs int
			
				root of binary tree for fast object lookup by address
				Initialized by buildIndex.
			
				stack limits
			tail *stackObjectBuf
		
			
			
				addObject adds a stack object at addr of type typ to the set of stack objects.
			
				buildIndex initializes s.root to a binary search tree.
				It should be called after all addObject calls but before
				any call of findObject.
			
				findObject returns the stack object containing address a, if any.
				Must have called buildIndex previously.
			
				Remove and return a potential pointer to a stack object.
				Returns 0 if there are no more pointers available.
				
				This prefers non-conservative pointers so we scan stack objects
				precisely if there are any non-conservative pointers to them.
			
				Add p as a potential pointer to a stack object.
				p must be a stack address.
		
			
			func scanblock(b0, n0 uintptr, ptrmask *uint8, gcw *gcWork, stk *stackScanState)
			func scanConservative(b, n uintptr, ptrmask *uint8, gcw *gcWork, state *stackScanState)
			func scanframeworker(frame *stkframe, state *stackScanState, gcw *gcWork)
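		The runtime builds a balanced binary tree over the address-ordered
		stack objects (buildIndex) so findObject can locate the object that
		contains a given address. Below is a hedged sketch of the same
		lookup contract using a sorted slice and binary search; the data
		structure differs from the runtime's and all names are illustrative.
		
			package main
			
			import (
				"fmt"
				"sort"
			)
			
			// objDemo stands in for a stack object: a start offset and a size.
			type objDemo struct {
				off  uintptr // offset of the object
				size uintptr // size of the object
			}
			
			// findObject returns the object containing addr, or nil. objs must be
			// sorted by off, mirroring "objects are in increasing address order".
			func findObject(objs []objDemo, addr uintptr) *objDemo {
				// Index of the first object whose start is strictly greater than addr.
				i := sort.Search(len(objs), func(i int) bool { return objs[i].off > addr })
				if i == 0 {
					return nil
				}
				o := &objs[i-1]
				if addr < o.off+o.size {
					return o
				}
				return nil
			}
			
			func main() {
				objs := []objDemo{{off: 0, size: 16}, {off: 32, size: 8}, {off: 64, size: 24}}
				fmt.Println(findObject(objs, 36) != nil) // true: inside the second object
				fmt.Println(findObject(objs, 48) != nil) // false: in a gap between objects
			}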
	
		
			
			pad_cgo_0 [4]byte
			ss_flags int32
			ss_size uintptr
			ss_sp *byte
		
			
			func setGsignalStack(st *stackt, old *gsignalStack)
			func setSignalstackSP(s *stackt, sp uintptr)
			func sigaltstack(new, old *stackt)
	
		Buffer for pointers found during stack tracing.
		Must be smaller than or equal to workbuf.
		
			
			obj [252]uintptr
			stackWorkBufHdr stackWorkBufHdr
			
				// linked list of workbufs
			stackWorkBufHdr.workbufhdr workbufhdr
			stackWorkBufHdr.workbufhdr.nobj int
			
				// must be first
	
		Header declaration must come after the buf declaration above, because of issue #14620.
		
			
			
				// linked list of workbufs
			workbufhdr workbufhdr
			workbufhdr.nobj int
			
				// must be first
	
		statAggregate is the main driver of the metrics implementation.
		
		It contains multiple aggregates of runtime statistics, as well
		as a set of these aggregates that it has populated. The aggregates
		are populated lazily by its ensure method.
		
			
			cpuStats cpuStatsAggregate
			ensured statDepSet
			gcStats gcStatsAggregate
			heapStats heapStatsAggregate
			sysStats sysStatsAggregate
		
			
			
				ensure populates statistics aggregates determined by deps if they
				haven't yet been populated.
		
			
			func compute0(_ *statAggregate, out *metricValue)
		
			
			  var agg
	
		statDep is a dependency on a group of statistics
		that a metric might have.
		
			
			func makeStatDepSet(deps ...statDep) statDepSet
		
			
			const cpuStatsDep
			const gcStatsDep
			const heapStatsDep
			const numStatsDeps
			const sysStatsDep
	
		statDepSet represents a set of statDeps.
		
		Under the hood, it's a bitmap.
		
			
			
				difference returns set difference of s from b as a new set.
			
				empty returns true if there are no dependencies in the set.
			
				has returns true if the set contains a given statDep.
			
				union returns the union of the two sets as a new set.
		
			
			func makeStatDepSet(deps ...statDep) statDepSet
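		A minimal sketch of a bitmap-backed dependency set with the method
		names listed above; the constants' ordinal values and the
		single-word width are illustrative assumptions.
		
			package main
			
			import "fmt"
			
			type statDep uint
			
			const (
				heapStatsDep statDep = iota
				sysStatsDep
				cpuStatsDep
				gcStatsDep
				numStatsDeps
			)
			
			// statDepSet is a bitmap with one bit per statDep.
			type statDepSet [1]uint64
			
			func makeStatDepSet(deps ...statDep) statDepSet {
				var s statDepSet
				for _, d := range deps {
					s[d/64] |= 1 << (d % 64)
				}
				return s
			}
			
			func (s statDepSet) union(b statDepSet) statDepSet      { s[0] |= b[0]; return s }
			func (s statDepSet) difference(b statDepSet) statDepSet { s[0] &^= b[0]; return s }
			func (s statDepSet) has(d statDep) bool                 { return s[d/64]&(1<<(d%64)) != 0 }
			func (s statDepSet) empty() bool                        { return s[0] == 0 }
			
			func main() {
				deps := makeStatDepSet(heapStatsDep, cpuStatsDep)
				ensured := makeStatDepSet(heapStatsDep)
				missing := deps.difference(ensured) // what an ensure-style method still has to populate
				fmt.Println(missing.has(cpuStatsDep), missing.has(heapStatsDep)) // true false
			}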
	
		A stkframe holds information about a single physical stack frame.
		
			
			
				// pointer to function arguments
			
				continpc is the PC where execution will continue in fn, or
				0 if execution will not continue in this frame.
				
				This is usually the same as pc, unless this frame "called"
				sigpanic, in which case it's either the address of
				deferreturn or 0 if this frame will never execute again.
				
				This is the PC to use to look up GC liveness for this frame.
			
				fn is the function being run in this frame. If there is
				inlining, this is the outermost function.
			
				// stack pointer at caller aka frame pointer
			
				// program counter at caller aka link register
			
				pc is the program counter within fn.
				
				The meaning of this is subtle:
				
				- Typically, this frame performed a regular function call
				  and this is the return PC (just after the CALL
				  instruction). In this case, pc-1 reflects the CALL
				  instruction itself and is the correct source of symbolic
				  information.
				
				- If this frame "called" sigpanic, then pc is the
				  instruction that panicked, and pc is the correct address
				  to use for symbolic information.
				
				- If this is the innermost frame, then PC is where
				  execution will continue, but it may not be the
				  instruction following a CALL. This may be from
				  cooperative preemption, in which case this is the
				  instruction after the call to morestack. Or this may be
				  from a signal or an un-started goroutine, in which case
				  PC could be any instruction, including the first
				  instruction in a function. Conventionally, we use pc-1
				  for symbolic information, unless pc == fn.entry(), in
				  which case we use pc.
			
				// stack pointer at pc
			
				// top of local variables
		
			
			
				argBytes returns the argument frame size for a call to frame.fn.
			
				argMapInternal is used internally by stkframe to fetch special
				argument maps.
				
				argMap.n is always populated with the size of the argument map.
				
				argMap.bytedata is only populated for dynamic argument maps (used
				by reflect). If the caller requires the argument map, it should use
				this if non-nil, and otherwise fetch the argument map using the
				current PC.
				
				hasReflectStackObj indicates that this frame also has a reflect
				function stack object, which the caller must synthesize.
			
				getStackMap returns the locals and arguments live pointer maps, and
				stack object list for frame.
		
			
			func adjustframe(frame *stkframe, adjinfo *adjustinfo)
			func dumpframe(s *stkframe, child *childInfo)
			func scanframeworker(frame *stkframe, state *stackScanState, gcw *gcWork)
			func tracebackHexdump(stk stack, frame *stkframe, bad uintptr)
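		A compact restatement of the pc conventions above as a sketch; the
		function and parameter names are illustrative, not runtime
		identifiers.
		
			package main
			
			import "fmt"
			
			// symbolizationPC returns the PC to use for symbolic information
			// (file/line lookup), following the convention described above:
			// return addresses are biased back by one byte so the lookup lands
			// inside the CALL instruction, while sigpanic frames and frames
			// stopped at a function's entry use pc as-is.
			func symbolizationPC(pc, entry uintptr, calledSigpanic bool) uintptr {
				if calledSigpanic || pc == entry {
					return pc
				}
				return pc - 1
			}
			
			func main() {
				const entry = 0x1000
				fmt.Printf("%#x\n", symbolizationPC(0x1042, entry, false)) // 0x1041
				fmt.Printf("%#x\n", symbolizationPC(entry, entry, false))  // 0x1000
			}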
	
		
			( stringer) String() string
		
			*runtime/debug.BuildInfo
			*bytes.Buffer
			 crypto.Hash
			 crypto/tls.ClientAuthType
			 crypto/tls.CurveID
			 crypto/tls.QUICEncryptionLevel
			 crypto/tls.SignatureScheme
			 crypto/x509.OID
			 crypto/x509.PublicKeyAlgorithm
			 crypto/x509.SignatureAlgorithm
			 crypto/x509/pkix.Name
			 crypto/x509/pkix.RDNSequence
			 encoding/asn1.ObjectIdentifier
			 encoding/binary.AppendByteOrder (interface)
			 encoding/binary.ByteOrder (interface)
			 encoding/json.Delim
			 encoding/json.Number
			*expvar.Float
			 expvar.Func
			*expvar.Int
			*expvar.Map
			*expvar.String
			 expvar.Var (interface)
			 flag.Getter (interface)
			 flag.Value (interface)
			 fmt.Stringer (interface)
			 github.com/google/go-cmp/cmp.Indirect
			 github.com/google/go-cmp/cmp.MapIndex
			 github.com/google/go-cmp/cmp.Options
			 github.com/google/go-cmp/cmp.Path
			 github.com/google/go-cmp/cmp.PathStep (interface)
			 github.com/google/go-cmp/cmp.SliceIndex
			 github.com/google/go-cmp/cmp.StructField
			 github.com/google/go-cmp/cmp.Transform
			 github.com/google/go-cmp/cmp.TypeAssertion
			 github.com/google/go-cmp/cmp/internal/diff.EditScript
			*github.com/valyala/fastjson.Object
			 github.com/valyala/fastjson.Type
			*github.com/valyala/fastjson.Value
			 go/ast.CommentMap
			*go/ast.Ident
			 go/ast.ObjKind
			*go/ast.Scope
			*go/build/constraint.AndExpr
			 go/build/constraint.Expr (interface)
			*go/build/constraint.NotExpr
			*go/build/constraint.OrExpr
			*go/build/constraint.TagExpr
			 go/constant.Kind
			 go/constant.Value (interface)
			 go/token.Position
			 go/token.Token
			*go/types.Alias
			*go/types.Array
			*go/types.Basic
			*go/types.Builtin
			*go/types.Chan
			*go/types.Const
			*go/types.Func
			*go/types.Initializer
			*go/types.Interface
			*go/types.Label
			*go/types.Map
			*go/types.MethodSet
			*go/types.Named
			*go/types.Nil
			 go/types.Object (interface)
			*go/types.Package
			*go/types.PkgName
			*go/types.Pointer
			*go/types.Scope
			*go/types.Selection
			*go/types.Signature
			*go/types.Slice
			*go/types.Struct
			*go/types.Term
			*go/types.Tuple
			 go/types.Type (interface)
			*go/types.TypeName
			*go/types.TypeParam
			*go/types.Union
			*go/types.Var
			 go.pact.im/x/phcformat.Hash
			 go.uber.org/goleak/internal/stack.Stack
			*go.uber.org/mock/gomock.Call
			 go.uber.org/mock/gomock.Matcher (interface)
			 go.uber.org/mock/gomock.StringerFunc
			 go.uber.org/zap.AtomicLevel
			*go.uber.org/zap/buffer.Buffer
			 go.uber.org/zap/zapcore.EntryCaller
			 go.uber.org/zap/zapcore.Level
			 golang.org/x/exp/apidiff.Report
			 golang.org/x/net/http2.ContinuationFrame
			 golang.org/x/net/http2.DataFrame
			 golang.org/x/net/http2.ErrCode
			 golang.org/x/net/http2.FrameHeader
			 golang.org/x/net/http2.FrameType
			 golang.org/x/net/http2.FrameWriteRequest
			 golang.org/x/net/http2.GoAwayFrame
			 golang.org/x/net/http2.HeadersFrame
			 golang.org/x/net/http2.MetaHeadersFrame
			 golang.org/x/net/http2.PingFrame
			 golang.org/x/net/http2.PriorityFrame
			 golang.org/x/net/http2.PushPromiseFrame
			 golang.org/x/net/http2.RSTStreamFrame
			 golang.org/x/net/http2.Setting
			 golang.org/x/net/http2.SettingID
			 golang.org/x/net/http2.SettingsFrame
			 golang.org/x/net/http2.UnknownFrame
			 golang.org/x/net/http2.WindowUpdateFrame
			 golang.org/x/net/http2/hpack.HeaderField
			*golang.org/x/net/idna.Profile
			*golang.org/x/net/internal/timeseries.Float
			*golang.org/x/text/unicode/bidi.Run
			 golang.org/x/tools/go/packages.LoadMode
			*golang.org/x/tools/go/packages.Package
			*golang.org/x/tools/go/types/typeutil.Map
			 golang.org/x/tools/internal/packagesinternal.PackageError
			*golang.org/x/tools/internal/pkgbits.Decoder
			 golang.org/x/tools/internal/pkgbits.SyncMarker
			 golang.org/x/tools/internal/stdlib.Kind
			 golang.org/x/tools/internal/stdlib.Version
			 golang.org/x/tools/internal/typesinternal.CallKind
			 golang.org/x/tools/internal/typesinternal.ErrorCode
			 golang.org/x/tools/internal/typesinternal.NamedOrAlias (interface)
			 golang.org/x/tools/internal/typesinternal.VarKind
			*google.golang.org/genproto/googleapis/rpc/status.Status
			 google.golang.org/grpc.Codec (interface)
			*google.golang.org/grpc/attributes.Attributes
			*google.golang.org/grpc/binarylog/grpc_binarylog_v1.Address
			 google.golang.org/grpc/binarylog/grpc_binarylog_v1.Address_Type
			*google.golang.org/grpc/binarylog/grpc_binarylog_v1.ClientHeader
			*google.golang.org/grpc/binarylog/grpc_binarylog_v1.GrpcLogEntry
			 google.golang.org/grpc/binarylog/grpc_binarylog_v1.GrpcLogEntry_EventType
			 google.golang.org/grpc/binarylog/grpc_binarylog_v1.GrpcLogEntry_Logger
			*google.golang.org/grpc/binarylog/grpc_binarylog_v1.Message
			*google.golang.org/grpc/binarylog/grpc_binarylog_v1.Metadata
			*google.golang.org/grpc/binarylog/grpc_binarylog_v1.MetadataEntry
			*google.golang.org/grpc/binarylog/grpc_binarylog_v1.ServerHeader
			*google.golang.org/grpc/binarylog/grpc_binarylog_v1.Trailer
			 google.golang.org/grpc/codes.Code
			 google.golang.org/grpc/connectivity.ServingMode
			 google.golang.org/grpc/connectivity.State
			 google.golang.org/grpc/credentials.SecurityLevel
			*google.golang.org/grpc/health/grpc_health_v1.HealthCheckRequest
			*google.golang.org/grpc/health/grpc_health_v1.HealthCheckResponse
			 google.golang.org/grpc/health/grpc_health_v1.HealthCheckResponse_ServingStatus
			*google.golang.org/grpc/health/grpc_health_v1.HealthListRequest
			*google.golang.org/grpc/health/grpc_health_v1.HealthListResponse
			*google.golang.org/grpc/internal/channelz.Channel
			*google.golang.org/grpc/internal/channelz.ChannelMetrics
			 google.golang.org/grpc/internal/channelz.Entity (interface)
			 google.golang.org/grpc/internal/channelz.Identifier (interface)
			 google.golang.org/grpc/internal/channelz.RefChannelType
			*google.golang.org/grpc/internal/channelz.Server
			*google.golang.org/grpc/internal/channelz.Socket
			*google.golang.org/grpc/internal/channelz.SubChannel
			 google.golang.org/grpc/internal/serviceconfig.Duration
			*google.golang.org/grpc/internal/status.Status
			*google.golang.org/grpc/peer.Peer
			 google.golang.org/grpc/resolver.Address
			 google.golang.org/grpc/resolver.Target
			 google.golang.org/protobuf/internal/encoding/json.Kind
			 google.golang.org/protobuf/internal/encoding/text.Kind
			 google.golang.org/protobuf/internal/encoding/text.NameKind
			 google.golang.org/protobuf/internal/impl.ValidationStatus
			 google.golang.org/protobuf/reflect/protoreflect.Cardinality
			 google.golang.org/protobuf/reflect/protoreflect.Kind
			 google.golang.org/protobuf/reflect/protoreflect.MapKey
			 google.golang.org/protobuf/reflect/protoreflect.SourcePath
			 google.golang.org/protobuf/reflect/protoreflect.Syntax
			 google.golang.org/protobuf/reflect/protoreflect.Value
			 google.golang.org/protobuf/runtime/protoiface.MessageV1 (interface)
			*google.golang.org/protobuf/types/known/anypb.Any
			*google.golang.org/protobuf/types/known/durationpb.Duration
			*google.golang.org/protobuf/types/known/timestamppb.Timestamp
			 internal/abi.Kind
			*internal/buildcfg.ExperimentFlags
			 internal/buildcfg.Goarm64Features
			 internal/buildcfg.GoarmFeatures
			*internal/godebug.Setting
			 internal/platform.OSArch
			*internal/profile.Graph
			*internal/profile.Profile
			 internal/reflectlite.Type (interface)
			 internal/types/errors.Code
			 io/fs.FileMode
			 math/big.Accuracy
			*math/big.Float
			*math/big.Int
			*math/big.Rat
			 math/big.RoundingMode
			 net.Addr (interface)
			 net.Flags
			 net.HardwareAddr
			 net.IP
			*net.IPAddr
			 net.IPMask
			*net.IPNet
			*net.TCPAddr
			*net.UDPAddr
			*net.UnixAddr
			 net/http.ConnState
			*net/http.Cookie
			 net/http.Protocols
			 net/netip.Addr
			 net/netip.AddrPort
			 net/netip.Prefix
			*net/url.URL
			*net/url.Userinfo
			*os.ProcessState
			 os.Signal (interface)
			*os/exec.Cmd
			 os/exec.ExitError
			 reflect.ChanDir
			 reflect.Kind
			 reflect.Type (interface)
			 reflect.Value
			*regexp.Regexp
			 regexp/syntax.ErrorCode
			*regexp/syntax.Inst
			 regexp/syntax.InstOp
			 regexp/syntax.Op
			*regexp/syntax.Prog
			*regexp/syntax.Regexp
			*strings.Builder
			 syscall.Signal
			 testing.BenchmarkResult
			*text/template/parse.ActionNode
			*text/template/parse.BoolNode
			*text/template/parse.BranchNode
			*text/template/parse.BreakNode
			*text/template/parse.ChainNode
			*text/template/parse.CommandNode
			*text/template/parse.CommentNode
			*text/template/parse.ContinueNode
			*text/template/parse.DotNode
			*text/template/parse.FieldNode
			*text/template/parse.IdentifierNode
			*text/template/parse.IfNode
			*text/template/parse.ListNode
			*text/template/parse.NilNode
			 text/template/parse.Node (interface)
			*text/template/parse.NumberNode
			*text/template/parse.PipeNode
			*text/template/parse.RangeNode
			*text/template/parse.StringNode
			*text/template/parse.TemplateNode
			*text/template/parse.TextNode
			*text/template/parse.VariableNode
			*text/template/parse.WithNode
			 time.Duration
			*time.Location
			 time.Month
			 time.Time
			 time.Weekday
			 vendor/golang.org/x/net/dns/dnsmessage.Class
			 vendor/golang.org/x/net/dns/dnsmessage.Name
			 vendor/golang.org/x/net/dns/dnsmessage.RCode
			 vendor/golang.org/x/net/dns/dnsmessage.Type
			 vendor/golang.org/x/net/http2/hpack.HeaderField
			*vendor/golang.org/x/net/idna.Profile
			*vendor/golang.org/x/text/unicode/bidi.Run
			
			 lockRank
			 stwReason
			 waitReason
			*runtime/pprof.labelMap
			*context.afterFuncCtx
			 context.backgroundCtx
			*context.cancelCtx
			 context.stringer (interface)
			*context.timerCtx
			 context.todoCtx
			*context.valueCtx
			 context.withoutCancelCtx
			*crypto/ecdh.nistCurve
			*crypto/ecdh.x25519Curve
			 crypto/tls.alert
			*embed.file
			 encoding/binary.bigEndian
			 encoding/binary.littleEndian
			 encoding/binary.nativeEndian
			*encoding/json.encodeState
			 flag.boolFlag (interface)
			 flag.boolFuncValue
			*flag.boolValue
			*flag.durationValue
			*flag.float64Value
			 flag.funcValue
			*flag.int64Value
			*flag.intValue
			*flag.stringValue
			 flag.textValue
			*flag.uint64Value
			*flag.uintValue
			 github.com/google/go-cmp/cmp.commentString
			 github.com/google/go-cmp/cmp.comparer
			*github.com/google/go-cmp/cmp.defaultReporter
			 github.com/google/go-cmp/cmp.diffStats
			 github.com/google/go-cmp/cmp.ignore
			 github.com/google/go-cmp/cmp.indirect
			 github.com/google/go-cmp/cmp.mapIndex
			 github.com/google/go-cmp/cmp.pathFilter
			 github.com/google/go-cmp/cmp.pathStep
			 github.com/google/go-cmp/cmp.sliceIndex
			 github.com/google/go-cmp/cmp.structField
			 github.com/google/go-cmp/cmp.textLine
			 github.com/google/go-cmp/cmp.textList
			 github.com/google/go-cmp/cmp.textNode (interface)
			*github.com/google/go-cmp/cmp.textWrap
			 github.com/google/go-cmp/cmp.transform
			 github.com/google/go-cmp/cmp.transformer
			 github.com/google/go-cmp/cmp.typeAssertion
			 github.com/google/go-cmp/cmp.valuesFilter
			 go/constant.boolVal
			 go/constant.complexVal
			 go/constant.floatVal
			 go/constant.int64Val
			 go/constant.intVal
			 go/constant.ratVal
			*go/constant.stringVal
			 go/constant.unknownVal
			*go/types._TypeSet
			 go/types.color
			 go/types.dependency (interface)
			 go/types.genericType (interface)
			*go/types.lazyObject
			*go/types.object
			*go/types.operand
			*go/types.term
			 go/types.termlist
			*go/types.unifier
			 go/types.unifyMode
			 go.pact.im/x/netchan.chanAddr
			 go.uber.org/mock/gomock.allMatcher
			 go.uber.org/mock/gomock.anyMatcher
			 go.uber.org/mock/gomock.anyOfMatcher
			 go.uber.org/mock/gomock.assignableToTypeOfMatcher
			 go.uber.org/mock/gomock.condMatcher[...]
			 go.uber.org/mock/gomock.eqMatcher
			 go.uber.org/mock/gomock.inAnyOrderMatcher
			 go.uber.org/mock/gomock.lenMatcher
			 go.uber.org/mock/gomock.nilMatcher
			 go.uber.org/mock/gomock.notMatcher
			 go.uber.org/mock/gomock.regexMatcher
			 golang.org/x/net/http2.streamState
			*golang.org/x/net/http2.writeData
			 golang.org/x/net/trace.cond (interface)
			*golang.org/x/net/trace.discarded
			 golang.org/x/net/trace.errorCond
			*golang.org/x/net/trace.histogram
			*golang.org/x/net/trace.lazySprintf
			 golang.org/x/net/trace.minCond
			*golang.org/x/text/unicode/bidi.bracketPair
			 golang.org/x/tools/go/packages.loaderPackage
			 golang.org/x/tools/internal/gcimporter.anyType
			*golang.org/x/tools/internal/gcimporter.intWriter
			*golang.org/x/tools/internal/gcimporter.reader
			*golang.org/x/tools/internal/typeparams.term
			 golang.org/x/tools/internal/typeparams.termlist
			*google.golang.org/grpc.acBalancerWrapper
			*google.golang.org/grpc.firstLine
			*google.golang.org/grpc.fmtStringer
			 google.golang.org/grpc.payload
			 google.golang.org/grpc.stringer
			*google.golang.org/grpc/internal/channelz.dummyEntry
			 google.golang.org/grpc/internal/channelz.entry (interface)
			 google.golang.org/grpc/internal/transport.strAddr
			*google.golang.org/grpc/mem.buffer
			 google.golang.org/protobuf/internal/filedesc.enumRange
			 google.golang.org/protobuf/internal/filedesc.fieldRange
			 google.golang.org/protobuf/internal/impl.legacyMessageWrapper
			 gotest.tools/v3/internal/source.debugFormatNode
			 html/template.attr
			 html/template.context
			 html/template.delim
			 html/template.element
			 html/template.jsCtx
			 html/template.state
			 html/template.urlPart
			 internal/buildcfg.gowasmFeatures
			 internal/reflectlite.rtype
			 io/fs.dirInfo
			*math/big.decimal
			 math/big.nat
			 net.addrPortUDPAddr
			 net.fileAddr
			 net.hostLookupOrder
			 net.pipeAddr
			 net.sockaddr (interface)
			 net/http.connectMethodKey
			*net/http.contextKey
			 net/http.http2ContinuationFrame
			 net/http.http2DataFrame
			 net/http.http2ErrCode
			 net/http.http2FrameHeader
			 net/http.http2FrameType
			 net/http.http2FrameWriteRequest
			 net/http.http2GoAwayFrame
			 net/http.http2HeadersFrame
			 net/http.http2MetaHeadersFrame
			 net/http.http2PingFrame
			 net/http.http2PriorityFrame
			 net/http.http2PushPromiseFrame
			 net/http.http2RSTStreamFrame
			 net/http.http2Setting
			 net/http.http2SettingID
			 net/http.http2SettingsFrame
			 net/http.http2streamState
			 net/http.http2UnknownFrame
			 net/http.http2WindowUpdateFrame
			*net/http.http2writeData
			*net/http.pattern
			*net/http.socksAddr
			 net/http.socksCommand
			 net/http.socksReply
			*os.unixDirent
			*reflect.rtype
			*regexp.onePassInst
			*strconv.decimal
			*testing.chattyFlag
			*testing.durationOrCountFlag
			 testing.fuzzResult
			*text/template/parse.elseNode
			*text/template/parse.endNode
			 text/template/parse.item
			*vendor/golang.org/x/text/unicode/bidi.bracketPair
		
			 stringer : expvar.Var
			 stringer : fmt.Stringer
			
			 stringer : context.stringer
	
		The specialized convTx routines need a type descriptor to use when calling mallocgc.
		We don't need the type to be exact, just to have the correct size, alignment, and pointer-ness.
		However, when debugging, it'd be nice to have some indication in mallocgc where the types came from,
		so we use named types here.
		We then construct interface values of these types,
		and then extract the type word to use as needed.
	
		type structtype = abi.StructType (struct)
		stwReason is an enumeration of reasons the world is stopping.
		
			( stwReason) String() string
			
			( stwReason) isGC() bool
		
			 stwReason : expvar.Var
			 stwReason : fmt.Stringer
			
			 stwReason : stringer
			 stwReason : context.stringer
		
			
			func stopTheWorld(reason stwReason) worldStop
			func stopTheWorldGC(reason stwReason) worldStop
			func stopTheWorldWithSema(reason stwReason) worldStop
		
			
			const stwAllGoroutinesStack
			const stwAllThreadsSyscall
			const stwForTestCountPagesInUse
			const stwForTestPageCachePagesLeaked
			const stwForTestReadMemStatsSlow
			const stwForTestReadMetricsSlow
			const stwForTestResetDebugLog
			const stwGCMarkTerm
			const stwGCSweepTerm
			const stwGOMAXPROCS
			const stwGoroutineProfile
			const stwGoroutineProfileCleanup
			const stwReadMemStats
			const stwStartTrace
			const stwStopTrace
			const stwUnknown
			const stwWriteHeapDump
	
		sudog (pseudo-g) represents a g in a wait list, such as for sending/receiving
		on a channel.
		
		sudog is necessary because the g ↔ synchronization object relation
		is many-to-many. A g can be on many wait lists, so there may be
		many sudogs for one g; and many gs may be waiting on the same
		synchronization object, so there may be many sudogs for one object.
		
		sudogs are allocated from a special pool. Use acquireSudog and
		releaseSudog to allocate and free them.
		
			
			acquiretime int64
			
				// channel
			
				// data element (may point to stack)
			g *g
			
				isSelect indicates g is participating in a select, so
				g.selectDone must be CAS'd to win the wake-up race.
			next *sudog
			
				// semaRoot binary tree
			prev *sudog
			releasetime int64
			
				success indicates whether communication over channel c
				succeeded. It is true if the goroutine was awoken because a
				value was delivered over channel c, and false if awoken
				because c was closed.
			ticket uint32
			
				waiters is a count of semaRoot waiting list other than head of list,
				clamped to a uint16 to fit in unused space.
				Only meaningful at the head of the list.
				(If we wanted to be overly clever, we could store a high 16 bits
				in the second entry in the list.)
			
				// g.waiting list or semaRoot
			
				// semaRoot
		
			
			func acquireSudog() *sudog
		
			
			func racenotify(c *hchan, idx uint, sg *sudog)
			func racesync(c *hchan, sg *sudog)
			func readyWithTime(s *sudog, traceskip int)
			func recv(c *hchan, sg *sudog, ep unsafe.Pointer, unlockf func(), skip int)
			func recvDirect(t *_type, sg *sudog, dst unsafe.Pointer)
			func releaseSudog(s *sudog)
			func send(c *hchan, sg *sudog, ep unsafe.Pointer, unlockf func(), skip int)
			func sendDirect(t *_type, sg *sudog, src unsafe.Pointer)
	
		
			
			
				dead indicates the goroutine was not suspended because it
				is dead. This goroutine could be reused after the dead
				state was observed, so the caller must not assume that it
				remains dead.
			g *g
			
				stopped indicates that this suspendG transitioned the G to
				_Gwaiting via g.preemptStop and thus is responsible for
				readying it when done.
		
			
			func suspendG(gp *g) suspendGState
		
			
			func resumeG(state suspendGState)
	
		sweepClass is a spanClass and one bit to represent whether we're currently
		sweeping partial or full spans.
		
			
			(*sweepClass) clear()
			(*sweepClass) load() sweepClass
			
				split returns the underlying span class as well as
				whether we're interested in the full or partial
				unswept lists for that class, indicated as a boolean
				(true means "full").
			(*sweepClass) update(sNew sweepClass)
		
			
			const sweepClassDone
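		A hedged sketch of split: one sweepClass value exists per
		(span class, full-or-partial) pair, so a sweeper can walk every
		unswept list by incrementing a single counter. Which parity of the
		low bit means "full" is an assumption here.
		
			package main
			
			import "fmt"
			
			type spanClassID uint8
			type sweepClassDemo uint32
			
			// split unpacks a sweep class into the span class and a full/partial flag.
			func (s sweepClassDemo) split() (spc spanClassID, full bool) {
				return spanClassID(s >> 1), s&1 == 0
			}
			
			func main() {
				for _, s := range []sweepClassDemo{0, 1, 2, 3} {
					spc, full := s.split()
					fmt.Println(s, spc, full)
				}
			}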
	
		State of background sweep.
		
			
			
				active tracks outstanding sweepers and the sweep
				termination condition.
			
				centralIndex is the current unswept span class.
				It represents an index into the mcentral span
				sets. Accessed and updated via its load and
				update methods. Not protected by a lock.
				
				Reset at mark termination.
				Used by mheap.nextSpanForSweep.
			g *g
			lock mutex
			parked bool
		
			
			  var sweep
	
		sweepLocked represents sweep ownership of a span.
		
			
			mspan *mspan
			
				allocBits and gcmarkBits hold pointers to a span's mark and
				allocation bits. The pointers are 8 byte aligned.
				There are three arenas where this data is held.
				free: Dirty arenas that are no longer accessed
				      and can be reused.
				next: Holds information to be used in the next GC cycle.
				current: Information being used during this GC cycle.
				previous: Information being used during the last GC cycle.
				A new GC cycle starts with the call to finishsweep_m.
				finishsweep_m moves the previous arena to the free arena,
				the current arena to the previous arena, and
				the next arena to the current arena.
				The next arena is populated as the spans request
				memory to hold gcmarkBits for the next GC cycle as well
				as allocBits for newly allocated spans.
				
				The pointer arithmetic is done "by hand" instead of using
				arrays to avoid bounds checks along critical performance
				paths.
				The sweep will free the old allocBits and set allocBits to the
				gcmarkBits. The gcmarkBits are replaced with a fresh zeroed
				out memory.
			
				Cache of the allocBits at freeindex. allocCache is shifted
				such that the lowest bit corresponds to the bit freeindex.
				allocCache holds the complement of allocBits, thus allowing
				ctz (count trailing zero) to use it directly.
				allocCache may contain bits beyond s.nelems; the caller must ignore
				these.
			
				// number of allocated objects
			
				// a copy of allocCount that is stored just before this span is cached
			
				// for divide by elemsize
			
				// computed from sizeclass or from npages
			
				freeIndexForScan is like freeindex, except that freeindex is
				used by the allocator whereas freeIndexForScan is used by the
				GC scanner. They are two fields so that the GC sees the object
				is allocated only when the object and the heap bits are
				initialized (see also the assignment of freeIndexForScan in
				mallocgc, and issue 54596).
			
				freeindex is the slot index between 0 and nelems at which to begin scanning
				for the next free object in this span.
				Each allocation scans allocBits starting at freeindex until it encounters a 0
				indicating a free object. freeindex is then adjusted so that subsequent scans begin
				just past the newly discovered free object.
				
				If freeindex == nelem, this span has no free objects.
				
				allocBits is a bitmap of objects in this span.
				If n >= freeindex and allocBits[n/8] & (1<<(n%8)) is 0
				then object n is free;
				otherwise, object n is allocated. Bits starting at nelem are
				undefined and should never be referenced.
				
				Object n starts at address n*elemsize + (start << pageShift).
			mspan.gcmarkBits *gcBits
			
				// whether or not this span represents a user arena
			
				// malloc header for large objects.
			
				// end of data in span
			
				// For debugging.
			
				// list of free objects in mSpanManual spans
			
				// needs to be zeroed before allocation
			
				TODO: Look up nelems from sizeclass and remove this field if it
				helps performance.
				// number of objects in the span.
			
				// next span in list, or nil if none
			
				// number of pages in span
			
				// bitmap for pinned objects; accessed atomically
			
				// previous span in list, or nil if none
			
				// size class and noscan (uint8)
			
				// guards specials list and changes to pinnerBits
			
				// linked list of special records sorted by offset.
			
				// address of first byte of span aka s.base()
			
				// mSpanInUse etc; accessed atomically (get/set methods)
			mspan.sweepgen uint32
			
				// interval for managing chunk allocation
		
			
			( sweepLocked) allocBitsForIndex(allocBitIndex uintptr) markBits
			( sweepLocked) base() uintptr
			
				countAlloc returns the number of objects allocated in span s by
				scanning the mark bitmap.
			
				decPinCounter decreases the counter. If the counter reaches 0, the counter
				special is deleted and false is returned. Otherwise true is returned.
			
				divideByElemSize returns n/s.elemsize.
				n must be within [0, s.npages*_PageSize),
				or may be exactly s.npages*_PageSize
				if s.elemsize is from sizeclasses.go.
				
				nosplit, because it is called by objIndex, which is nosplit
			
				Returns only when span s has been swept.
			
				nosplit, because it's called by isPinned, which is nosplit
			
				heapBits returns the heap ptr/scalar bits stored at the end of the span for
				small object spans and heap arena spans.
				
				Note that the uintptr of each element means something different for small object
				spans and for heap arena spans. Small object spans are easy: they're never interpreted
				as anything but uintptr, so they're immune to differences in endianness. However, the
				heapBits for user arena spans is exposed through a dummy type descriptor, so the byte
				ordering needs to match the same byte ordering the compiler would emit. The compiler always
				emits the bitmap data in little endian byte ordering, so on big endian platforms these
				uintptrs will have their byte orders swapped from what they normally would be.
				
				heapBitsInSpan(span.elemsize) or span.isUserArenaChunk must be true.
			
				heapBitsSmallForAddr loads the heap bits for the object stored at addr from span.heapBits.
				
				addr must be the base pointer of an object in the span. heapBitsInSpan(span.elemsize)
				must be true.
			( sweepLocked) inList() bool
			
				incPinCounter is only called for multiple pins of the same object and records
				the _additional_ pins.
			
				Initialize a new span with the given start and npages.
			
				initHeapBits initializes the heap bitmap for a span.
			
				isFree reports whether the index'th object in s is unallocated.
				
				The caller must ensure s.state is mSpanInUse, and there must have
				been no preemption points since ensuring this (which could allow a
				GC transition, which would allow the state to change).
			
				isUnusedUserArenaChunk indicates that the arena chunk has been set to fault
				and doesn't contain any scannable memory anymore. However, it might still be
				mSpanInUse as it sits on the quarantine list, since it needs to be swept.
				
				This is not safe to execute unless the caller has ownership of the mspan or
				the world is stopped (preemption is prevented while the relevant state changes).
				
				This is really only meant to be used by accounting tests in the runtime to
				distinguish when a span shouldn't be counted (since mSpanInUse might not be
				enough).
			( sweepLocked) layout() (size, n, total uintptr)
			( sweepLocked) markBitsForBase() markBits
			( sweepLocked) markBitsForIndex(objIndex uintptr) markBits
			
				newPinnerBits returns a pointer to 8 byte aligned bytes to be used for this
				span's pinner bits. newPinnerBits is used to mark objects that are pinned.
				They are copied when the span is swept.
			
				nextFreeIndex returns the index of the next free object in s at
				or after s.freeindex.
				There are hardware instructions that can be used to make this
				faster if profiling warrants it.
			
				objBase returns the base pointer for the object containing addr in span.
				
				Assumes that addr points into a valid part of span (span.base() <= addr < span.limit).
			
				nosplit, because it is called by other nosplit code like findObject
			( sweepLocked) pinnerBitSize() uintptr
			
				refillAllocCache takes 8 bytes s.allocBits starting at whichByte
				and negates them so that ctz (count trailing zeros) instructions
				can be used. It then places these 8 bytes into the cached 64 bit
				s.allocCache.
			
				refreshPinnerBits replaces pinnerBits with a fresh copy in the arenas for the
				next GC cycle. If it does not contain any pinned objects, pinnerBits of the
				span is set to nil.
			
				reportZombies reports any marked but free objects in s and throws.
				
				This generally means one of the following:
				
				1. User code converted a pointer to a uintptr and then back
				unsafely, and a GC ran while the uintptr was the only reference to
				an object.
				
				2. User code (or a compiler bug) constructed a bad pointer that
				points to a free slot, often a past-the-end pointer.
				
				3. The GC two cycles ago missed a pointer and freed a live object,
				but it was still live in the last cycle, so this GC cycle found a
				pointer to that object and marked it.
			( sweepLocked) setPinnerBits(p *pinnerBits)
			
				setUserArenaChunkToFault sets the address space for the user arena chunk to fault
				and releases any underlying memory resources.
				
				Must be in a non-preemptible state to ensure the consistency of statistics
				exported to MemStats.
			
				Find a splice point in the sorted list and check for an already existing
				record. Returns a pointer to the next-reference in the list predecessor.
				Returns true, if the referenced item is an exact match.
			
				sweep frees or collects finalizers for blocks not marked in the mark phase.
				It clears the mark bits in preparation for the next GC round.
				Returns true if the span was returned to heap.
				If preserve=true, don't return it to heap nor relink in mcentral lists;
				caller takes care of it.
			
				typePointersOf returns an iterator over all heap pointers in the range [addr, addr+size).
				
				addr and addr+size must be in the range [span.base(), span.limit).
				
				Note: addr+size must be passed as the limit argument to the iterator's next method on
				each iteration. This slightly awkward API is to allow typePointers to be destructured
				by the compiler.
				
				nosplit because it is used during write barriers and must not be preempted.
			
				typePointersOfType is like typePointersOf, but assumes addr points to one or more
				contiguous instances of the provided type. The provided type must not be nil.
				
				It returns an iterator that tiles typ's gcmask starting from addr. It's the caller's
				responsibility to limit iteration.
				
				nosplit because its callers are nosplit and require all their callees to be nosplit.
			
				typePointersOfUnchecked is like typePointersOf, but assumes addr is the base
				of an allocation slot in a span (the start of the object if no header, the
				header otherwise). It returns an iterator that generates all pointers
				in the range [addr, addr+span.elemsize).
				
				nosplit because it is used during write barriers and must not be preempted.
			
				userArenaNextFree reserves space in the user arena for an item of the specified
				type. If cap is not -1, this is for an array of cap elements of type t.
			
				writeHeapBitsSmall writes the heap bits for small objects whose ptr/scalar data is
				stored as a bitmap at the end of the span.
				
				Assumes dataSize is <= ptrBits*goarch.PtrSize. x must be a pointer into the span.
				heapBitsInSpan(dataSize) must be true. dataSize must be >= typ.Size_.
			( sweepLocked) writeUserArenaHeapBits(addr uintptr) (h writeUserArenaHeapBits)
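		The freeindex/allocBits/allocCache fields above describe an
		allocation scan that caches the complement of 8 bytes of the
		allocation bitmap so a count-trailing-zeros instruction finds the
		next free slot. A minimal sketch of that idea, ignoring the
		runtime's chunked refill and other details:
		
			package main
			
			import (
				"fmt"
				"math/bits"
			)
			
			// nextFree returns the index of the next free object at or after
			// freeindex, given the allocation bitmap (1 = allocated), or nelems if
			// the span is full. The cache holds ^allocBits shifted so bit 0
			// corresponds to freeindex, letting TrailingZeros find a free slot.
			func nextFree(allocBits []uint8, freeindex, nelems uintptr) uintptr {
				for idx := freeindex; idx < nelems; {
					// Load up to 8 bytes of the bitmap and negate them so free
					// objects appear as 1 bits.
					var cache uint64
					whichByte := idx / 8
					for i := uintptr(0); i < 8 && whichByte+i < uintptr(len(allocBits)); i++ {
						cache |= uint64(allocBits[whichByte+i]) << (8 * i)
					}
					cache = ^cache >> (idx % 8)
					if cache != 0 {
						free := idx + uintptr(bits.TrailingZeros64(cache))
						if free < nelems {
							return free
						}
						return nelems
					}
					idx = (whichByte + 8) * 8 // advance past the cached chunk
				}
				return nelems
			}
			
			func main() {
				// 10 objects; objects 0-4 and 6 allocated: 0b0101_1111 = 0x5f.
				allocBits := []uint8{0x5f, 0x00}
				fmt.Println(nextFree(allocBits, 0, 10)) // 5
				fmt.Println(nextFree(allocBits, 6, 10)) // 7
			}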
	
		sweepLocker acquires sweep ownership of spans.
		
			
			
				sweepGen is the sweep generation of the heap.
			valid bool
		
			
			
				tryAcquire attempts to acquire sweep ownership of span s. If it
				successfully acquires ownership, it blocks sweep completion.
	
		A synctestGroup is a group of goroutines started by synctest.Run.
		
			
			
				// other sources of activity
			mu mutex
			
				// current fake time
			
				// caller of synctest.Run
			
				// non-blocked goroutines
			timers timers
			
				The group is active (not blocked) so long as running > 0 || active > 0.
				
				running is the number of goroutines which are not "durably blocked":
				Goroutines which are either running, runnable, or non-durably blocked
				(for example, blocked in a syscall).
				
				active is used to keep the group from becoming blocked,
				even if all goroutines in the group are blocked.
				For example, park_m can choose to immediately unpark a goroutine after parking it.
				It increments the active count to keep the group active until it has determined
				that the park operation has completed.
				// total goroutines
			
				// caller of synctest.Wait
			
				// true if a goroutine is calling synctest.Wait
		
			
			
				changegstatus is called when the non-lock status of a g changes.
				It is never called with a Gscanstatus.
			
				decActive decrements the active-count for the group.
			
				incActive increments the active-count for the group.
				A group does not become durably blocked while the active-count is non-zero.
			
				maybeWakeLocked returns a g to wake if the group is durably blocked.
			(*synctestGroup) raceaddr() unsafe.Pointer
	
		sysMemStat represents a global system statistic that is managed atomically.
		
		This type must structurally be a uint64 so that mstats aligns with MemStats.
		
			
			
				add atomically adds the sysMemStat by n.
				
				Must be nosplit as it is called in runtime initialization, e.g. newosproc0.
			
				load atomically reads the value of the stat.
				
				Must be nosplit as it is called in runtime initialization, e.g. newosproc0.
		
			
			func persistentalloc(size, align uintptr, sysStat *sysMemStat) unsafe.Pointer
			func persistentalloc1(size, align uintptr, sysStat *sysMemStat) *notInHeap
			func sysAlloc(n uintptr, sysStat *sysMemStat) unsafe.Pointer
			func sysFree(v unsafe.Pointer, n uintptr, sysStat *sysMemStat)
			func sysMap(v unsafe.Pointer, n uintptr, sysStat *sysMemStat)
	
		sysStatsAggregate represents system memory stats obtained
		from the runtime. This set of stats is grouped together because
		they're all relatively cheap to acquire and generally independent
		of one another and other runtime memory stats. The fact that they
		may be acquired at different times, especially with respect to
		heapStatsAggregate, means there could be some skew, but because
		these stats are independent, there's no real consistency issue here.
		
			
			buckHashSys uint64
			gcCyclesDone uint64
			gcCyclesForced uint64
			gcMiscSys uint64
			heapGoal uint64
			mCacheInUse uint64
			mCacheSys uint64
			mSpanInUse uint64
			mSpanSys uint64
			otherSys uint64
			stacksSys uint64
		
			
			
				compute populates the sysStatsAggregate with values from the runtime.
	
		taggedPointer is a pointer with a numeric tag.
		The size of the numeric tag is GOARCH-dependent,
		currently at least 10 bits.
		This should only be used with pointers allocated outside the Go heap.
		
			
			
				Pointer returns the pointer from a taggedPointer.
			
				Tag returns the tag from a taggedPointer.
		
			
			func taggedPointerPack(ptr unsafe.Pointer, tag uintptr) taggedPointer
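		A hedged sketch of the general tagged-pointer technique: here a
		small tag is stored in the spare low bits of an 8-byte-aligned
		pointer. The runtime's layout differs (it uses spare high address
		bits and a wider, GOARCH-dependent tag), and as noted above the
		trick is only safe for pointers the garbage collector does not need
		to observe.
		
			package main
			
			import (
				"fmt"
				"unsafe"
			)
			
			// taggedDemo packs a 3-bit tag into the low bits of an aligned pointer.
			type taggedDemo uintptr
			
			func pack(p unsafe.Pointer, tag uintptr) taggedDemo {
				return taggedDemo(uintptr(p) | (tag & 7)) // assumes 8-byte alignment
			}
			
			func (t taggedDemo) pointer() unsafe.Pointer { return unsafe.Pointer(uintptr(t) &^ 7) }
			func (t taggedDemo) tag() uintptr            { return uintptr(t) & 7 }
			
			func main() {
				x := new(uint64) // kept alive by x; the tagged copy is not a GC-visible reference
				tp := pack(unsafe.Pointer(x), 5)
				fmt.Println(tp.pointer() == unsafe.Pointer(x), tp.tag()) // true 5
			}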
	
		
			
			
				// relocated section address
			
				// vaddr + section length
			
				// prelinked section vaddr
	
		throwType indicates the current type of ongoing throw, which affects the
		amount of detail printed to stderr. Higher values include more detail.
		
			
			func fatalthrow(t throwType)
		
			
			const throwTypeNone
			const throwTypeRuntime
			const throwTypeUser
	
		
			
			
				lock protects access to start* and val.
			startTicks int64
			startTime int64
			val atomic.Int64
		
			
			
				init initializes ticks to maximize the chance that we have a good ticksPerSecond reference.
				
				Must not run concurrently with ticksPerSecond.
		
			
			  var ticks
	
		timeHistogram represents a distribution of durations in
		nanoseconds.
		
		The accuracy and range of the histogram is defined by the
		timeHistSubBucketBits and timeHistNumBuckets constants.
		
		It is an HDR histogram with exponentially-distributed
		buckets and linearly distributed sub-buckets.
		
		The histogram is safe for concurrent reads and writes.
		
			
			counts [160]atomic.Uint64
			
				overflow counts all the times we got a duration that exceeded
				the range counts represents.
			
				underflow counts all the times we got a negative duration
				sample. Because of how time works on some platforms, it's
				possible to measure negative durations. We could ignore them,
				but we record them anyway because it's better to have some
				signal that it's happening than just missing samples.
		
			
			
				record adds the given duration to the distribution.
				
				Disallow preemptions and stack growths because this function
				may run in sensitive locations.
			
				write dumps the histogram to the passed metricValue as a float64 histogram.
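		A sketch of how a duration can be mapped to an exponential bucket
		and a linear sub-bucket: the bucket comes from the position of the
		highest set bit, the sub-bucket from the next few bits below it.
		The constants and boundary handling are illustrative, not the
		runtime's.
		
			package main
			
			import (
				"fmt"
				"math/bits"
			)
			
			const subBucketBits = 4 // 16 linear sub-buckets per exponential bucket
			
			// bucketIndex maps a duration in nanoseconds to (bucket, sub-bucket).
			func bucketIndex(nanos int64) (bucket, subBucket int) {
				if nanos < 0 {
					return -1, 0 // a real implementation counts this as "underflow"
				}
				v := uint64(nanos)
				l := bits.Len64(v)
				if l <= subBucketBits {
					return 0, int(v) // small values land in the first, fully linear bucket
				}
				bucket = l - subBucketBits
				subBucket = int((v >> uint(l-subBucketBits-1)) & (1<<subBucketBits - 1))
				return bucket, subBucket
			}
			
			func main() {
				for _, d := range []int64{3, 17, 1024, 100_000} {
					b, s := bucketIndex(d)
					fmt.Println(d, b, s)
				}
			}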
	
		A timer is a potentially repeating trigger for calling t.f(t.arg, t.seq).
		Timers are allocated by client code, often as part of other data structures.
		Each P has a heap of pointers to timers that it manages.
		
		A timer is expected to be used by only one client goroutine at a time,
		but there will be concurrent access by the P managing that timer.
		Timer accesses are protected by the lock t.mu, with a snapshot of
		t's state bits published in t.astate to enable certain fast paths to make
		decisions about a timer without acquiring the lock.
		
			
			arg any
			
				// atomic copy of state bits at last unlock
			
				// number of goroutines blocked on timer's channel
			f func(arg any, seq uintptr, delay int64)
			
				// timer has a channel; immutable; can be read without lock
			
				// timer is using fake time; immutable; can be read without lock
			
				isSending is used to handle races between running a
				channel timer and stopping or resetting the timer.
				It is used only for channel timers (t.isChan == true).
				It is not used for tickers.
				The value is incremented when about to send a value on the channel,
				and decremented after sending the value.
				The stop/reset code uses this to detect whether it
				stopped the channel send.
				
				isSending is incremented only when t.mu is held.
				isSending is decremented only when t.sendLock is held.
				isSending is read only when both t.mu and t.sendLock are held.
			
				mu protects reads and writes to all fields, with exceptions noted below.
			period int64
			
				sendLock protects sends on the timer's channel.
				Not used for async (pre-Go 1.23) behavior when debug.asynctimerchan.Load() != 0.
			seq uintptr
			
				// state bits
			
				If non-nil, the timers containing t.
			
				Timer wakes up at when, and then at when+period, ... (period > 0 only)
				each time calling f(arg, seq, delay) in the timer goroutine, so f must be
				a well-behaved function and not block.
				
				The arg and seq are client-specified opaque arguments passed back to f.
				When used from netpoll, arg and seq have meanings defined by netpoll
				and are completely opaque to this code; in that context, seq is a sequence
				number to recognize and squelch stale function invocations.
				When used from package time, arg is a channel (for After, NewTicker)
				or the function to call (for AfterFunc) and seq is unused (0).
				
				Package time does not know about seq, but if this is a channel timer (t.isChan == true),
				this file uses t.seq as a sequence number to recognize and squelch
				sends that correspond to an earlier (stale) timer configuration,
				similar to its use in netpoll. In this usage (that is, when t.isChan == true),
				writes to seq are protected by both t.mu and t.sendLock,
				so reads are allowed when holding either of the two mutexes.
				
				The delay argument is nanotime() - t.when, meaning the delay in ns between
				when the timer should have gone off and now. Normally that amount is
				small enough not to matter, but for channel timers that are fed lazily,
				the delay can be arbitrarily long; package time subtracts it out to make
				it look like the send happened earlier than it actually did.
				(No one looked at the channel since then, or the send would have
				not happened so late, so no one can tell the difference.)
		
			
			
				hchan returns the channel in t.arg.
				t must be a timer with a channel.
			
				init initializes a newly allocated timer t.
				Any code that allocates a timer must call t.init before using it.
				The arg and f can be set during init, or they can be nil in init
				and set by a future call to t.modify.
			
				lock locks the timer, allowing reading or writing any of the timer fields.
			
				maybeAdd adds t to the local timers heap if it needs to be in a heap.
				The caller must not hold t's lock nor any timers heap lock.
				The caller probably just unlocked t, but that lock must be dropped
				in order to acquire a ts.lock, to avoid lock inversions.
				(timers.adjust holds ts.lock while acquiring each t's lock,
				so we cannot hold any t's lock while acquiring ts.lock).
				
				Strictly speaking it *might* be okay to hold t.lock and
				acquire ts.lock at the same time, because we know that
				t is not in any ts.heap, so nothing holding a ts.lock would
				be acquiring the t.lock at the same time, meaning there
				isn't a possible deadlock. But it is easier and safer not to be
				too clever and respect the static ordering.
				(If we don't, we have to change the static lock checking of t and ts.)
				
				Concurrent calls to time.Timer.Reset or blockTimerChan
				may result in concurrent calls to t.maybeAdd,
				so we cannot assume that t is not in a heap on entry to t.maybeAdd.
			
				maybeRunAsync checks whether t needs to be triggered and runs it if so.
				The caller is responsible for locking the timer and for checking that we
				are running timers in async mode. If the timer needs to be run,
				maybeRunAsync will unlock and re-lock it.
				The timer is always locked on return.
			
				maybeRunChan checks whether the timer needs to run
				to send a value to its associated channel. If so, it does.
				The timer must not be locked.
			
				modify modifies an existing timer.
				This is called by the netpoll code or time.Ticker.Reset or time.Timer.Reset.
				Reports whether the timer was modified before it was run.
				If f == nil, then t.f, t.arg, and t.seq are not modified.
			
				needsAdd reports whether t needs to be added to a timers heap.
				t must be locked.
			
				reset resets the time when a timer should fire.
				If used for an inactive timer, the timer will become active.
				Reports whether the timer was active and was stopped.
			
				stop stops the timer t. It may be on some other P, so we can't
				actually remove it from the timers heap. We can only mark it as stopped.
				It will be removed in due course by the P whose heap it is on.
				Reports whether the timer was stopped before it was run.
			(*timer) trace(op string)
			(*timer) trace1(op string)
			
				unlock updates t.astate and unlocks the timer.
			
				unlockAndRun unlocks and runs the timer t (which must be locked).
				If t is in a timer set (t.ts != nil), the caller must also have locked the timer set,
				and this call will temporarily unlock the timer set while running the timer function.
				unlockAndRun returns with t unlocked and t.ts (re-)locked.
			
				updateHeap updates t as directed by t.state, updating t.state
				and returning a bool indicating whether the state (and ts.heap[0].when) changed.
				The caller must hold t's lock, or the world can be stopped instead.
				The timer set t.ts must be non-nil and locked, t must be t.ts.heap[0], and updateHeap
				takes care of moving t within the timers heap to preserve the heap invariants.
				If ts == nil, then t must not be in a heap (or is in a heap that is
				temporarily not maintaining its invariant, such as during timers.adjust).
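		The isSending/sendLock machinery above is what lets package time
		guarantee, for the Go 1.23+ timer implementation (asynctimerchan
		disabled), that Stop and Reset on a channel-based Timer never leave
		a stale value to be received. A small usage sketch of that
		observable behavior:
		
			package main
			
			import (
				"fmt"
				"time"
			)
			
			func main() {
				t := time.NewTimer(time.Hour)
			
				// With the current timer implementation, Stop and Reset guarantee
				// that no value from the old configuration will be delivered, so
				// no channel drain is needed between them.
				t.Stop()
				t.Reset(10 * time.Millisecond)
			
				fmt.Println(<-t.C)
			}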
	
		A timers is a per-P set of timers.
		
			
			
				heap is the set of timers, ordered by heap[i].when.
				Must hold lock to access.
			
				len is an atomic copy of len(heap).
			
				minWhenHeap is the minimum heap[i].when value (= heap[0].when).
				The wakeTime method uses minWhenHeap and minWhenModified
				to determine the next wake time.
				If minWhenHeap = 0, it means there are no timers in the heap.
			
				minWhenModified is a lower bound on the minimum
				heap[i].when over timers with the timerModified bit set.
				If minWhenModified = 0, it means there are no timerModified timers in the heap.
			
				mu protects timers; timers are per-P, but the scheduler can
				access the timers of another P, so we have to lock.
			
				raceCtx is the race context used while executing timer functions.
			syncGroup *synctestGroup
			
				zombies is the number of timers in the heap
				that are marked for removal.
		
			
			
				addHeap adds t to the timers heap.
				The caller must hold ts.lock or the world must be stopped.
				The caller must also have checked that t belongs in the heap.
				Callers that are not sure can call t.maybeAdd instead,
				but note that maybeAdd has different locking requirements.
			
				adjust looks through the timers in ts.heap for
				any timers that have been modified to run earlier, and puts them in
				the correct place in the heap. While looking for those timers,
				it also moves timers that have been modified to run later,
				and removes deleted timers. The caller must have locked ts.
			
				check runs any timers in ts that are ready.
				If now is not 0 it is the current time.
				It returns the passed time or the current time if now was passed as 0,
				and the time when the next timer should run or 0 if there is no next timer,
				and reports whether it ran any timers.
				If the time when the next timer should run is not 0,
				it is always larger than the returned time.
				We pass now in and out to avoid extra calls of nanotime.
			
				cleanHead cleans up the head of the timer queue. This speeds up
				programs that create and delete timers; leaving them in the heap
				slows down heap operations.
				The caller must have locked ts.
			
				deleteMin removes timer 0 from ts.
				ts must be locked.
			
				initHeap reestablishes the heap order in the slice ts.heap.
				It takes O(n) time for n=len(ts.heap), not the O(n log n) of n repeated add operations.
			(*timers) lock()
			
				run examines the first timer in ts. If it is ready based on now,
				it runs the timer and removes or updates it.
				Returns 0 if it ran a timer, -1 if there are no more timers, or the time
				when the first timer should run.
				The caller must have locked ts.
				If a timer is run, this will temporarily unlock ts.
			
				siftDown puts the timer at position i in the right place
				in the heap by moving it down toward the bottom of the heap.
			
				siftUp puts the timer at position i in the right place
				in the heap by moving it up toward the top of the heap.
			
				take moves any timers from src into ts
				and then clears the timer state from src,
				because src is being destroyed.
				The caller must not have locked either set of timers (ts or src).
				For now this is only called when the world is stopped.
			(*timers) trace(op string)
			(*timers) unlock()
			
				updateMinWhenHeap sets ts.minWhenHeap to ts.heap[0].when.
				The caller must have locked ts or the world must be stopped.
			
				updateMinWhenModified updates ts.minWhenModified to be <= when.
				ts need not be (and usually is not) locked.
			
				verifyTimerHeap verifies that the timers is in a valid state.
				This is only for debugging, and is only called if verifyTimers is true.
				The caller must have locked ts.
			
				wakeTime looks at ts's timers and returns the time when we
				should wake up the netpoller. It returns 0 if there are no timers.
				This function is invoked when dropping a P, so it must run without
				any write barriers.
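				
				A minimal stand-alone sketch (illustrative, not the runtime's code) of how a wake time
				falls out of two minimums where 0 means "none": take the earlier nonzero value, as
				wakeTime does with minWhenHeap and minWhenModified.
				
					package main
					
					import "fmt"
					
					// nextWake mirrors the idea behind wakeTime: minHeap and minModified are
					// nanosecond timestamps, with 0 meaning "none". The result is the earliest
					// nonzero time, or 0 if both are zero.
					func nextWake(minHeap, minModified int64) int64 {
						switch {
						case minHeap == 0:
							return minModified
						case minModified == 0:
							return minHeap
						case minModified < minHeap:
							return minModified
						}
						return minHeap
					}
					
					func main() {
						fmt.Println(nextWake(0, 0))    // 0: nothing to wake for
						fmt.Println(nextWake(100, 0))  // 100
						fmt.Println(nextWake(100, 40)) // 40: a modified timer may fire earlier
					}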
	
		A timeTimer is a runtime-allocated time.Timer or time.Ticker
		with the additional runtime state following it.
		The runtime state is inaccessible to package time.
		
			
			
				// <-chan time.Time
			init bool
			timer timer
			timer.arg any
			
				// atomic copy of state bits at last unlock
			
				// number of goroutines blocked on timer's channel
			timer.f func(arg any, seq uintptr, delay int64)
			
				// timer has a channel; immutable; can be read without lock
			
				// timer is using fake time; immutable; can be read without lock
			
				isSending is used to handle races between running a
				channel timer and stopping or resetting the timer.
				It is used only for channel timers (t.isChan == true).
				It is not used for tickers.
				The value is incremented when about to send a value on the channel,
				and decremented after sending the value.
				The stop/reset code uses this to detect whether it
				stopped the channel send.
				
				isSending is incremented only when t.mu is held.
				isSending is decremented only when t.sendLock is held.
				isSending is read only when both t.mu and t.sendLock are held.
			
				mu protects reads and writes to all fields, with exceptions noted below.
			timer.period int64
			
				sendLock protects sends on the timer's channel.
				Not used for async (pre-Go 1.23) behavior when debug.asynctimerchan.Load() != 0.
			timer.seq uintptr
			
				// state bits
			
				If non-nil, the timers containing t.
			
				Timer wakes up at when, and then at when+period, ... (period > 0 only)
				each time calling f(arg, seq, delay) in the timer goroutine, so f must be
				a well-behaved function and not block.
				
				The arg and seq are client-specified opaque arguments passed back to f.
				When used from netpoll, arg and seq have meanings defined by netpoll
				and are completely opaque to this code; in that context, seq is a sequence
				number to recognize and squelch stale function invocations.
				When used from package time, arg is a channel (for After, NewTicker)
				or the function to call (for AfterFunc) and seq is unused (0).
				
				Package time does not know about seq, but if this is a channel timer (t.isChan == true),
				this file uses t.seq as a sequence number to recognize and squelch
				sends that correspond to an earlier (stale) timer configuration,
				similar to its use in netpoll. In this usage (that is, when t.isChan == true),
				writes to seq are protected by both t.mu and t.sendLock,
				so reads are allowed when holding either of the two mutexes.
				
				The delay argument is nanotime() - t.when, meaning the delay in ns between
				when the timer should have gone off and now. Normally that amount is
				small enough not to matter, but for channel timers that are fed lazily,
				the delay can be arbitrarily long; package time subtracts it out to make
				it look like the send happened earlier than it actually did.
				(No one looked at the channel since then, or the send would have
				not happened so late, so no one can tell the difference.)
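				
				A small illustrative sketch of that delay arithmetic (plain Go, names are not the
				runtime's): subtracting the delivery delay from the current time recovers the nominal
				fire time, so a late send can be stamped as if it were on time.
				
					package main
					
					import (
						"fmt"
						"time"
					)
					
					func main() {
						when := time.Now().Add(10 * time.Millisecond) // nominal fire time
						time.Sleep(50 * time.Millisecond)             // the value is consumed late
					
						now := time.Now()
						delay := now.Sub(when)     // how late delivery is (> 0 here)
						stamped := now.Add(-delay) // the timestamp a late send would carry
						fmt.Println(stamped.Equal(when), delay > 0)
					}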
		
			
			
				hchan returns the channel in t.arg.
				t must be a timer with a channel.
			
				lock locks the timer, allowing reading or writing any of the timer fields.
			
				maybeAdd adds t to the local timers heap if it needs to be in a heap.
				The caller must not hold t's lock nor any timers heap lock.
				The caller probably just unlocked t, but that lock must be dropped
				in order to acquire a ts.lock, to avoid lock inversions.
				(timers.adjust holds ts.lock while acquiring each t's lock,
				so we cannot hold any t's lock while acquiring ts.lock).
				
				Strictly speaking it *might* be okay to hold t.lock and
				acquire ts.lock at the same time, because we know that
				t is not in any ts.heap, so nothing holding a ts.lock would
				be acquiring the t.lock at the same time, meaning there
				isn't a possible deadlock. But it is easier and safer not to be
				too clever and respect the static ordering.
				(If we don't, we have to change the static lock checking of t and ts.)
				
				Concurrent calls to time.Timer.Reset or blockTimerChan
				may result in concurrent calls to t.maybeAdd,
				so we cannot assume that t is not in a heap on entry to t.maybeAdd.
			
				maybeRunAsync checks whether t needs to be triggered and runs it if so.
				The caller is responsible for locking the timer and for checking that we
				are running timers in async mode. If the timer needs to be run,
				maybeRunAsync will unlock and re-lock it.
				The timer is always locked on return.
			
				maybeRunChan checks whether the timer needs to run
				to send a value to its associated channel. If so, it does.
				The timer must not be locked.
			
				modify modifies an existing timer.
				This is called by the netpoll code or time.Ticker.Reset or time.Timer.Reset.
				Reports whether the timer was modified before it was run.
				If f == nil, then t.f, t.arg, and t.seq are not modified.
			
				needsAdd reports whether t needs to be added to a timers heap.
				t must be locked.
			
				reset resets the time when a timer should fire.
				If used for an inactive timer, the timer will become active.
				Reports whether the timer was active and was stopped.
			
				stop stops the timer t. It may be on some other P, so we can't
				actually remove it from the timers heap. We can only mark it as stopped.
				It will be removed in due course by the P whose heap it is on.
				Reports whether the timer was stopped before it was run.
			(*timeTimer) trace(op string)
			(*timeTimer) trace1(op string)
			
				unlock updates t.astate and unlocks the timer.
			
				unlockAndRun unlocks and runs the timer t (which must be locked).
				If t is in a timer set (t.ts != nil), the caller must also have locked the timer set,
				and this call will temporarily unlock the timer set while running the timer function.
				unlockAndRun returns with t unlocked and t.ts (re-)locked.
			
				updateHeap updates t as directed by t.state, updating t.state
				and returning a bool indicating whether the state (and ts.heap[0].when) changed.
				The caller must hold t's lock, or the world can be stopped instead.
				The timer set t.ts must be non-nil and locked, t must be t.ts.heap[0], and updateHeap
				takes care of moving t within the timers heap to preserve the heap invariants.
				If ts == nil, then t must not be in a heap (or is in a heap that is
				temporarily not maintaining its invariant, such as during timers.adjust).
		
			
			func newTimer(when, period int64, f func(arg any, seq uintptr, delay int64), arg any, c *hchan) *timeTimer
		
			
			func resetTimer(t *timeTimer, when, period int64) bool
			func stopTimer(t *timeTimer) bool
	
		
			
			func concatstring2(buf *tmpBuf, a0, a1 string) string
			func concatstring3(buf *tmpBuf, a0, a1, a2 string) string
			func concatstring4(buf *tmpBuf, a0, a1, a2, a3 string) string
			func concatstring5(buf *tmpBuf, a0, a1, a2, a3, a4 string) string
			func concatstrings(buf *tmpBuf, a []string) string
			func rawstringtmp(buf *tmpBuf, l int) (s string, b []byte)
			func slicebytetostring(buf *tmpBuf, ptr *byte, n int) string
			func slicerunetostring(buf *tmpBuf, a []rune) string
			func stringtoslicebyte(buf *tmpBuf, s string) []byte
	
		
			
			done chan struct{}
			timer *wakeableSleep
		
			
			
				start starts a new traceAdvancer.
			
				stop stops a traceAdvancer and blocks until it exits.
		
			
			  var traceAdvancer
	
		traceArg is a simple wrapper type to help ensure that arguments passed
		to traces are well-formed.
		
			
			func traceCompressStackSize(size uintptr) traceArg
			func traceGoroutineStackID(base uintptr) traceArg
			func traceHeapObjectID(addr uintptr) traceArg
			func traceSpanID(s *mspan) traceArg
			func traceSpanTypeAndClass(s *mspan) traceArg
	
		traceBlockReason is an enumeration of reasons a goroutine might block.
		This is the interface the rest of the runtime uses to tell the
		tracer why a goroutine blocked. The tracer then propagates this information
		into the trace however it sees fit.
		
		Note that traceBlockReasons should not be compared, since reasons that are
		distinct by name may *not* be distinct by value.
		
			
			func gopark(unlockf func(*g, unsafe.Pointer) bool, lock unsafe.Pointer, reason waitReason, traceReason traceBlockReason, traceskip int)
			func goparkunlock(lock *mutex, reason waitReason, traceReason traceBlockReason, traceskip int)
		
			
			const traceBlockChanRecv
			const traceBlockChanSend
			const traceBlockCondWait
			const traceBlockDebugCall
			const traceBlockForever
			const traceBlockGCMarkAssist
			const traceBlockGCSweep
			const traceBlockGCWeakToStrongWait
			const traceBlockGeneric
			const traceBlockNet
			const traceBlockPreempted
			const traceBlockSelect
			const traceBlockSleep
			const traceBlockSync
			const traceBlockSynctest
			const traceBlockSystemGoroutine
			const traceBlockUntilGCEnds
	
		traceBuf is per-M tracing buffer.
		
		TODO(mknyszek): Rename traceBuf to traceBatch, since they map 1:1 with event batches.
		
			
			
				// underlying buffer for traceBufHeader.buf
			traceBufHeader traceBufHeader
			
				// when we wrote the last event
			
				// position of batch length value
			
				// in trace.empty/full
			
				// next write offset in arr
		
			
			
				nosplit because it's part of writing an event for an M, which must not
				have any stack growth.
			
				byte appends v to buf.
				
				nosplit because it's part of writing an event for an M, which must not
				have any stack growth.
			
				stringData appends s's data directly to buf.
				
				nosplit because it's part of writing an event for an M, which must not
				have any stack growth.
			
				varint appends v to buf in little-endian-base-128 encoding.
				
				nosplit because it's part of writing an event for an M, which must not
				have any stack growth.
			
				varintAt writes varint v at byte position pos in buf. This always
				consumes traceBytesPerNumber bytes. This is intended for when the caller
				needs to reserve space for a varint but can't populate it until later.
				Use varintReserve to reserve this space.
				
				nosplit because it's part of writing an event for an M, which must not
				have any stack growth.
			
				varintReserve reserves enough space in buf to hold any varint.
				
				Space reserved this way can be filled in with the varintAt method.
				
				nosplit because it's part of writing an event for an M, which must not
				have any stack growth.
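				
				Together, varint, varintReserve, and varintAt follow a standard shape. A hedged
				stand-alone sketch of that shape (constants and names here are illustrative, not the
				runtime's): a variable-length append, plus a fixed-width form that always consumes a
				reserved number of bytes so a value can be patched in later.
				
					package main
					
					import "fmt"
					
					// bytesPerNumber is the fixed width used by the "reserve, then patch" form;
					// 10 bytes of 7 value bits each is enough for any uint64.
					const bytesPerNumber = 10
					
					// appendVarint appends v in little-endian base-128 form: 7 value bits per
					// byte, high bit set on every byte except the last.
					func appendVarint(buf []byte, v uint64) []byte {
						for v >= 0x80 {
							buf = append(buf, byte(v)|0x80)
							v >>= 7
						}
						return append(buf, byte(v))
					}
					
					// putVarintAt writes v at pos using exactly bytesPerNumber bytes, padding the
					// tail with continuation bytes so a normal varint decoder still reads one value.
					func putVarintAt(buf []byte, pos int, v uint64) {
						for i := 0; i < bytesPerNumber; i++ {
							b := byte(v) & 0x7f
							v >>= 7
							if i < bytesPerNumber-1 {
								b |= 0x80
							}
							buf[pos+i] = b
						}
					}
					
					func main() {
						fmt.Printf("%% x\n", appendVarint(nil, 300)) // ac 02
					
						fixed := make([]byte, bytesPerNumber)
						putVarintAt(fixed, 0, 300) // same value, fixed width
						fmt.Printf("%% x\n", fixed)
					}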
		
			
			func traceBufFlush(buf *traceBuf, gen uintptr)
			func unsafeTraceExpWriter(gen uintptr, buf *traceBuf, exp traceExperiment) traceWriter
			func unsafeTraceWriter(gen uintptr, buf *traceBuf) traceWriter
	
		traceBufHeader is per-P tracing buffer.
		
			
			
				// when we wrote the last event
			
				// position of batch length value
			
				// in trace.empty/full
			
				// next write offset in arr
	
		traceBufQueue is a FIFO of traceBufs.
		
			
			head *traceBuf
			tail *traceBuf
		
			
			(*traceBufQueue) empty() bool
			
				pop dequeues from the queue of buffers.
			
				push queues buf into queue of buffers.
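				
				A hedged stand-alone sketch of this head/tail FIFO shape, with an illustrative buffer
				type standing in for traceBuf: push links onto the tail, pop unlinks from the head.
				
					package main
					
					import "fmt"
					
					type buf struct {
						next *buf
						id   int
					}
					
					type bufQueue struct {
						head, tail *buf
					}
					
					func (q *bufQueue) empty() bool { return q.head == nil }
					
					// push queues b at the tail of the queue.
					func (q *bufQueue) push(b *buf) {
						b.next = nil
						if q.tail == nil {
							q.head = b
						} else {
							q.tail.next = b
						}
						q.tail = b
					}
					
					// pop dequeues from the head of the queue, returning nil if it is empty.
					func (q *bufQueue) pop() *buf {
						b := q.head
						if b == nil {
							return nil
						}
						q.head = b.next
						if q.head == nil {
							q.tail = nil
						}
						b.next = nil
						return b
					}
					
					func main() {
						var q bufQueue
						q.push(&buf{id: 1})
						q.push(&buf{id: 2})
						fmt.Println(q.pop().id, q.pop().id, q.empty())
					}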
	
		Event types in the trace, args are given in square brackets.
		
		Naming scheme:
		  - Time range event pairs have suffixes "Begin" and "End".
		  - "Start", "Stop", "Create", "Destroy", "Block", "Unblock"
		    are suffixes reserved for scheduling resources.
		
		NOTE: If you add an event type, make sure you also update all
		tables in this file!
		
			
			const traceEvCPUSample
			const traceEvCPUSamples
			const traceEvEventBatch
			const traceEvExperimentalBatch
			const traceEvFrequency
			const traceEvGCActive
			const traceEvGCBegin
			const traceEvGCEnd
			const traceEvGCMarkAssistActive
			const traceEvGCMarkAssistBegin
			const traceEvGCMarkAssistEnd
			const traceEvGCSweepActive
			const traceEvGCSweepBegin
			const traceEvGCSweepEnd
			const traceEvGoBlock
			const traceEvGoCreate
			const traceEvGoCreateBlocked
			const traceEvGoCreateSyscall
			const traceEvGoDestroy
			const traceEvGoDestroySyscall
			const traceEvGoLabel
			const traceEvGoroutineStack
			const traceEvGoroutineStackAlloc
			const traceEvGoroutineStackFree
			const traceEvGoStart
			const traceEvGoStatus
			const traceEvGoStatusStack
			const traceEvGoStop
			const traceEvGoSwitch
			const traceEvGoSwitchDestroy
			const traceEvGoSyscallBegin
			const traceEvGoSyscallEnd
			const traceEvGoSyscallEndBlocked
			const traceEvGoUnblock
			const traceEvHeapAlloc
			const traceEvHeapGoal
			const traceEvHeapObject
			const traceEvHeapObjectAlloc
			const traceEvHeapObjectFree
			const traceEvNone
			const traceEvProcsChange
			const traceEvProcStart
			const traceEvProcStatus
			const traceEvProcSteal
			const traceEvProcStop
			const traceEvSpan
			const traceEvSpanAlloc
			const traceEvSpanFree
			const traceEvStack
			const traceEvStacks
			const traceEvString
			const traceEvStrings
			const traceEvSTWBegin
			const traceEvSTWEnd
			const traceEvUserLog
			const traceEvUserRegionBegin
			const traceEvUserRegionEnd
			const traceEvUserTaskBegin
			const traceEvUserTaskEnd
	
		traceEventWriter is the high-level API for writing trace events.
		
		See the comment on traceWriter about style for more details as to why
		this type and its methods are structured the way they are.
		
			
			tl traceLocker
		
			
			
				event writes out a trace event.
	
		traceExperiment is an enumeration of the different kinds of experiments supported for tracing.
		
			
			func unsafeTraceExpWriter(gen uintptr, buf *traceBuf, exp traceExperiment) traceWriter
		
			
			const traceExperimentAllocFree
			const traceNoExperiment
			const traceNumExperiments
	
		
			PC uintptr
			
			fileID uint64
			funcID uint64
			line uint64
		
			
			func makeTraceFrame(gen uintptr, f Frame) traceFrame
			func makeTraceFrames(gen uintptr, pcs []uintptr) []traceFrame
	
		traceGoStatus is the status of a goroutine.
		
		They correspond directly to the various goroutine
		statuses.
		
			
			func goStatusToTraceGoStatus(status uint32, wr waitReason) traceGoStatus
		
			
			const traceGoBad
			const traceGoRunnable
			const traceGoRunning
			const traceGoSyscall
			const traceGoWaiting
	
		traceGoStopReason is an enumeration of reasons a goroutine might yield.
		
		Note that traceGoStopReasons should not be compared, since reasons that are
		distinct by name may *not* be distinct by value.
		
			
			const traceGoStopGeneric
			const traceGoStopGoSched
			const traceGoStopPreempted
	
		traceLocker represents an M writing trace events. While a traceLocker value
		is valid, the tracer observes all operations on the G/M/P or trace events being
		written as happening atomically.
		
			
			gen uintptr
			mp *m
		
			
				GCActive traces a GCActive event.
				
				Must be emitted by an actively running goroutine on an active P. This restriction can be changed
				easily and only depends on where it's currently called.
			
				GCDone traces a GCEnd event.
				
				Must be emitted by an actively running goroutine on an active P. This restriction can be changed
				easily and only depends on where it's currently called.
			
				GCMarkAssistDone emits a MarkAssistEnd event.
			
				GCMarkAssistStart emits a MarkAssistBegin event.
			
				GCStart traces a GCBegin event.
				
				Must be emitted by an actively running goroutine on an active P. This restriction can be changed
				easily and only depends on where it's currently called.
			
				GCSweepDone finishes tracing a sweep loop. If any memory was
				swept (i.e. traceGCSweepSpan emitted an event) then this will emit
				a GCSweepEnd event.
				
				Must be called with a valid P.
			
				GCSweepSpan traces the sweep of a single span. If this is
				the first span swept since traceGCSweepStart was called, this
				will emit a GCSweepBegin event.
				
				This may be called outside a traceGCSweepStart/traceGCSweepDone
				pair; however, it will not emit any trace events in this case.
				
				Must be called with a valid P.
			
				GCSweepStart prepares to trace a sweep loop. This does not
				emit any events until traceGCSweepSpan is called.
				
				GCSweepStart must be paired with traceGCSweepDone and there
				must be no preemption points between these two calls.
				
				Must be called with a valid P.
			
				GoCreate emits a GoCreate event.
			
				GoCreateSyscall indicates that a goroutine has transitioned from dead to GoSyscall.
				
				Unlike GoCreate, the caller must be running on gp.
				
				This occurs when C code calls into Go. On pthread platforms it occurs only when
				a C thread calls into Go code for the first time.
			
				GoDestroySyscall indicates that a goroutine has transitioned from GoSyscall to dead.
				
				Must not have a P.
				
				This occurs when Go code returns back to C. On pthread platforms it occurs only when
				the C thread is destroyed.
			
				GoEnd emits a GoDestroy event.
				
				TODO(mknyszek): Rename this to GoDestroy.
			
				GoPark emits a GoBlock event with the provided reason.
				
				TODO(mknyszek): Replace traceBlockReason with waitReason. It's silly
				that we have both, and waitReason is way more descriptive.
			
				GoPreempt emits a GoStop event with a GoPreempted reason.
			
				GoSched emits a GoStop event with a GoSched reason.
			
				GoStart emits a GoStart event.
				
				Must be called with a valid P.
			
				GoStop emits a GoStop event with the provided reason.
			
				GoSwitch emits a GoSwitch event. If destroy is true, the calling goroutine
				is simultaneously being destroyed.
			
				GoSysCall emits a GoSyscallBegin event.
				
				Must be called with a valid P.
			
				GoSysExit emits a GoSyscallEnd event, possibly along with a GoSyscallBlocked event
				if lostP is true.
				
				lostP must be true in all cases that a goroutine loses its P during a syscall.
				This means it's not sufficient to check if it has no P. In particular, it needs to be
				true in the following cases:
				- The goroutine lost its P, it ran some other code, and then got it back. It's now running with that P.
				- The goroutine lost its P and was unable to reacquire it, and is now running without a P.
				- The goroutine lost its P and acquired a different one, and is now running with that P.
			
				GoUnpark emits a GoUnblock event.
			
				Gomaxprocs emits a ProcsChange event.
			
				GoroutineStackAlloc records that a goroutine stack was newly allocated at address base with the provided size.
			
				GoroutineStackExists records that a goroutine stack already exists at address base with the provided size.
			
				GoroutineStackFree records that a goroutine stack at address base is about to be freed.
			
				HeapAlloc emits a HeapAlloc event.
			
				HeapGoal reads the current heap goal and emits a HeapGoal event.
			
				HeapObjectAlloc records that an object was newly allocated at addr with the provided type.
				The type is optional, and the size of the slot occupied by the object is inferred from the
				span containing it.
			
				HeapObjectExists records that an object already exists at addr with the provided type.
				The type is optional, and the size of the slot occupied by the object is inferred from the
				span containing it.
			
				HeapObjectFree records that an object at addr is about to be freed.
			
				ProcStart traces a ProcStart event.
				
				Must be called with a valid P.
			
				ProcSteal indicates that our current M stole a P from another M.
				
				inSyscall indicates that we're stealing the P from a syscall context.
				
				The caller must have ownership of pp.
			
				ProcStop traces a ProcStop event.
			
				STWDone traces a STWEnd event.
			
				STWStart traces a STWBegin event.
			
				SpanAlloc records an event indicating that the span has just been allocated.
			
				SpanExists records an event indicating that the span exists.
			
				SpanFree records an event indicating that the span is about to be freed.
			
			
				emitUnblockStatus emits a GoStatus GoWaiting event for a goroutine about to be
				unblocked to the trace writer.
			
				eventWriter creates a new traceEventWriter. It is the main entrypoint for writing trace events.
				
				Before creating the event writer, this method will emit a status for the current goroutine
				or proc if it exists, and if it hasn't had its status emitted yet. goStatus and procStatus indicate
				what the status of goroutine or P should be immediately *before* the events that are about to
				be written using the eventWriter (if they exist). No status will be written if there's no active
				goroutine or P.
				
				Callers can elect to pass a constant value here if the status is clear (e.g. a goroutine must have
				been Runnable before a GoStart). Otherwise, callers can query the status of either the goroutine
				or P and pass the appropriate status.
				
				In this case, the default status should be traceGoBad or traceProcBad to help identify bugs sooner.
			
				expWriter returns a traceWriter that writes into the current M's stream for
				the given experiment.
			
				ok returns true if the traceLocker is valid (i.e. tracing is enabled).
				
				nosplit because it's called on the syscall path when stack movement is forbidden.
			
				rtype returns a traceArg representing typ which may be passed to write.
			
				stack takes a stack trace skipping the provided number of frames.
				It then returns a traceArg representing that stack which may be
				passed to write.
			
				startPC takes a start PC for a goroutine and produces a unique
				stack ID for it.
				
				It then returns a traceArg representing that stack which may be
				passed to write.
			
				string returns a traceArg representing s which may be passed to write.
				The string is assumed to be relatively short and popular, so it may be
				stored for a while in the string dictionary.
			
				uniqueString returns a traceArg representing s which may be passed to write.
				The string is assumed to be unique or long, so it will be written out to
				the trace eagerly.
			
				writer returns a traceWriter that writes into the current M's stream.
				
				Once this is called, the caller must guard against stack growth until
				end is called on it. Therefore, it's highly recommended to use this
				API in a "fluent" style, for example tl.writer().event(...).end().
				Better yet, callers just looking to write events should use eventWriter
				when possible, which is a much safer wrapper around this function.
				
				nosplit to allow for safe reentrant tracing from stack growth paths.
		
			
			func traceAcquire() traceLocker
			func traceAcquireEnabled() traceLocker
		
			
			func exitsyscallfast_reacquired(trace traceLocker)
			func traceRelease(tl traceLocker)
	
		
			
			mem traceRegionAlloc
			
				// *traceMapNode (can't use generics because it's notinheap)
			seq atomic.Uint64
		
			
			(*traceMap) newTraceMapNode(data unsafe.Pointer, size, hash uintptr, id uint64) *traceMapNode
			
				put inserts the data into the table.
				
				It's always safe for callers to noescape data because put copies its bytes.
				
				Returns a unique ID for the data and whether this is the first time
				the data has been added to the map.
			
				reset drops all allocated memory from the table and resets it.
				
				The caller must ensure that there are no put operations executing concurrently
				with this function.
			
				stealID steals an ID from the table, ensuring that it will not
				appear in the table anymore.
	
		traceMapNode is an implementation of a lock-free append-only hash-trie
		(a trie of the hash bits).
		
		Key features:
		  - 4-ary trie. Child nodes are indexed by the upper 2 (remaining) bits of the hash.
		    For example, top level uses bits [63:62], next level uses [61:60] and so on.
		  - New nodes are placed at the first empty level encountered.
		  - When the first child is added to a node, the existing value is not moved into a child.
		    This means that you must check the key at each level, not just at the leaf.
		  - No deletion or rebalancing.
		  - Intentionally devolves into a linked list on hash collisions (the hash bits will all
		    get shifted out during iteration, and new nodes will just be appended to the 0th child).
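		
		A minimal, hedged sketch of this layout (illustrative names, sync/atomic for the lock-free
		insert; not the runtime's code): children are indexed by two hash bits per level, a node is
		placed at the first empty slot found, and the key is compared at every level because nodes
		are never moved.
		
			package main
			
			import (
				"fmt"
				"sync/atomic"
			)
			
			type node struct {
				children [4]atomic.Pointer[node]
				hash     uint64
				key      string
				id       uint64
			}
			
			type trie struct {
				root atomic.Pointer[node]
				seq  atomic.Uint64
			}
			
			// put returns a unique ID for key, inserting it if it is not already present.
			func (t *trie) put(key string, hash uint64) uint64 {
				slot := &t.root
				for shift := 62; ; shift -= 2 {
					n := slot.Load()
					if n == nil {
						nn := &node{hash: hash, key: key, id: t.seq.Add(1)}
						if slot.CompareAndSwap(nil, nn) {
							return nn.id
						}
						n = slot.Load() // lost a race; examine the winner instead
					}
					if n.hash == hash && n.key == key {
						return n.id // keys are checked at every level, not just leaves
					}
					// Descend by the next two bits of the hash. Once the bits run out,
					// collisions chain through child 0, degenerating into a list.
					i := 0
					if shift >= 0 {
						i = int(hash>>uint(shift)) & 3
					}
					slot = &n.children[i]
				}
			}
			
			func main() {
				var t trie
				fmt.Println(t.put("a", 0x1234), t.put("b", 0x1234), t.put("a", 0x1234))
			}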
		
			
			
				// *traceMapNode (can't use generics because it's notinheap)
			data []byte
			hash uintptr
			id uint64
		
			
			func dumpStacksRec(node *traceMapNode, w traceWriter, stackBuf []uintptr) traceWriter
			func dumpTypesRec(node *traceMapNode, w traceWriter) traceWriter
	
		traceProcStatus is the status of a P.
		
		They mostly correspond to the various P statuses.
		
			
			const traceProcBad
			const traceProcIdle
			const traceProcRunning
			const traceProcSyscall
			const traceProcSyscallAbandoned
	
		traceRegionAlloc is a thread-safe region allocator.
		It holds a linked list of traceRegionAllocBlock.
		
			
			
				// *traceRegionAllocBlock
			
				// For checking invariants.
			full *traceRegionAllocBlock
			lock mutex
		
			
			
				alloc allocates an n-byte block. The block is always aligned to 8 bytes, regardless of platform.
			
				drop frees all previously allocated memory and resets the allocator.
				
				drop is not safe to call concurrently with other calls to drop or with calls to alloc. The caller
				must ensure that it is not possible for anything else to be using the same structure.
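				
				Putting alloc and drop together, a hedged, mutex-based sketch of the block-and-bump
				shape described above (the runtime version uses its own locking and off-heap blocks;
				names here are illustrative):
				
					package main
					
					import (
						"fmt"
						"sync"
					)
					
					const blockSize = 1 << 16
					
					type block struct {
						next *block
						off  int
						data [blockSize]byte
					}
					
					type region struct {
						mu      sync.Mutex
						current *block // block being bump-allocated from
						full    *block // list of exhausted blocks
					}
					
					// alloc returns n bytes, rounded up to 8-byte alignment, carved out of the
					// current block; a fresh block is started when the current one cannot fit n.
					func (r *region) alloc(n int) []byte {
						n = (n + 7) &^ 7
						r.mu.Lock()
						defer r.mu.Unlock()
						if r.current == nil || r.current.off+n > blockSize {
							old := r.current
							r.current = &block{}
							if old != nil {
								old.next = r.full
								r.full = old
							}
						}
						p := r.current.data[r.current.off : r.current.off+n]
						r.current.off += n
						return p
					}
					
					// drop releases every block at once; here the GC reclaims them, whereas the
					// runtime's allocator returns its memory manually.
					func (r *region) drop() {
						r.mu.Lock()
						r.current, r.full = nil, nil
						r.mu.Unlock()
					}
					
					func main() {
						var r region
						fmt.Println(len(r.alloc(5)), len(r.alloc(16))) // 8 16
						r.drop()
					}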
	
		traceRegionAllocBlock is a block in traceRegionAlloc.
		
		traceRegionAllocBlock is allocated from non-GC'd memory, so it must not
		contain heap pointers. Writes to pointers to traceRegionAllocBlocks do
		not need write barriers.
		
			
			data [65520]byte
			traceRegionAllocBlockHeader traceRegionAllocBlockHeader
			traceRegionAllocBlockHeader.next *traceRegionAllocBlock
			traceRegionAllocBlockHeader.off atomic.Uintptr
	
		traceSchedResourceState is shared state for scheduling resources (i.e. fields common to
		both Gs and Ps).
		
			
			
				seq is the sequence counter for this scheduling resource's events.
				The purpose of the sequence counter is to establish a partial order between
				events that don't obviously happen serially (same M) in the stream of events.
				
				There are two of these so that we can reset the counter on each generation.
				This saves space in the resulting trace by keeping the counter small and allows
				GoStatus and GoCreate events to omit a sequence number (implicitly 0).
			
				statusTraced indicates whether a status event was traced for this resource
				in a particular generation.
				
				There are 3 of these because when transitioning across generations, traceAdvance
				needs to be able to reliably observe whether a status was traced for the previous
				generation, while we need to clear the value for the next generation.
		
			
			
				acquireStatus acquires the right to emit a Status event for the scheduling resource.
				
				nosplit because it's part of writing an event for an M, which must not
				have any stack growth.
			
				nextSeq returns the next sequence number for the resource.
			
				readyNextGen readies r for the generation following gen.
			
				setStatusTraced indicates that the resource's status was already traced, for example
				when a goroutine is created.
			
				statusWasTraced returns true if the sched resource's status was already acquired for tracing.
	
		traceStackTable maps stack traces (arrays of PC's) to unique uint32 ids.
		It is lock-free for reading.
		
			
			tab traceMap
		
			
			
				dump writes all previously cached stacks to trace buffers,
				releases all memory and resets state. It must only be called once the caller
				can guarantee that there are no more writers to the table.
			
				put returns a unique id for the stack trace pcs and caches it in the table,
				if it sees the trace for the first time.
	
		
			
			
				// init tracing activation status
			
				// heap allocations
			
				// heap allocated bytes
			
				// init goroutine id
		
			
			  var inittrace
	
		traceStringTable is a map of string -> unique ID that also manages
		writing strings out into the trace.
		
			
			
				// string batches to write out to the trace.
			
				lock protects buf.
			
				tab is a mapping of string -> unique ID.
		
			
			
				emit emits a string and creates an ID for it, but doesn't add it to the table. Returns the ID.
			
				put adds a string to the table, emits it, and returns a unique ID for it.
			
				reset clears the string table and flushes any buffers it has.
				
				Must be called only once the caller is certain nothing else will be
				added to this table.
			
				writeString writes the string to t.buf.
				
				Must run on the systemstack because it acquires t.lock.
	
		traceTime represents a timestamp for the trace.
		
			
			func traceClockNow() traceTime
	
		traceTypeTable maps types to unique uint32 ids.
		It is lock-free for reading.
		
			
			tab traceMap
		
			
			
				dump writes all previously cached types to trace buffers and
				releases all memory and resets state. It must only be called once the caller
				can guarantee that there are no more writers to the table.
			
				put returns a unique id for the type typ and caches it in the table,
				if it's seeing it for the first time.
				
				N.B. typ must be kept alive forever for this to work correctly.
	
		traceWriter is the interface for writing all trace data.
		
		This type is passed around as a value, and all of its methods return
		a new traceWriter. This allows for chaining together calls in a fluent-style
		API. This is partly stylistic, and very slightly for performance, since
		the compiler can destructure this value and pass it between calls as
		just regular arguments. However, this style is not load-bearing, and
		we can change it if it's deemed too error-prone.
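		
		A tiny sketch of that pass-by-value, fluent shape (illustrative names and a placeholder
		encoding, not the runtime's): each method returns the possibly updated writer, so calls
		chain and the value can be passed between calls like ordinary arguments.
		
			package main
			
			import "fmt"
			
			type writer struct {
				buf []byte
			}
			
			// event appends an event byte plus its arguments (placeholder encoding) and
			// returns the updated writer so calls can be chained.
			func (w writer) event(ev byte, args ...uint64) writer {
				w.buf = append(w.buf, ev)
				for _, a := range args {
					w.buf = append(w.buf, byte(a))
				}
				return w
			}
			
			// end finishes the chain and hands back the accumulated bytes.
			func (w writer) end() []byte { return w.buf }
			
			func main() {
				fmt.Printf("%% x\n", writer{}.event(1, 42).event(2).end())
			}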
		
			
			exp traceExperiment
			traceBuf *traceBuf
			
				// underlying buffer for traceBufHeader.buf
			traceBuf.traceBufHeader traceBufHeader
			
				// when we wrote the last event
			
				// position of batch length value
			
				// in trace.empty/full
			
				// next write offset in arr
			traceLocker traceLocker
			traceLocker.gen uintptr
			traceLocker.mp *m
		
			
				GCActive traces a GCActive event.
				
				Must be emitted by an actively running goroutine on an active P. This restriction can be changed
				easily and only depends on where it's currently called.
			
				GCDone traces a GCEnd event.
				
				Must be emitted by an actively running goroutine on an active P. This restriction can be changed
				easily and only depends on where it's currently called.
			
				GCMarkAssistDone emits a MarkAssistEnd event.
			
				GCMarkAssistStart emits a MarkAssistBegin event.
			
				GCStart traces a GCBegin event.
				
				Must be emitted by an actively running goroutine on an active P. This restriction can be changed
				easily and only depends on where it's currently called.
			
				GCSweepDone finishes tracing a sweep loop. If any memory was
				swept (i.e. traceGCSweepSpan emitted an event) then this will emit
				a GCSweepEnd event.
				
				Must be called with a valid P.
			
				GCSweepSpan traces the sweep of a single span. If this is
				the first span swept since traceGCSweepStart was called, this
				will emit a GCSweepBegin event.
				
				This may be called outside a traceGCSweepStart/traceGCSweepDone
				pair; however, it will not emit any trace events in this case.
				
				Must be called with a valid P.
			
				GCSweepStart prepares to trace a sweep loop. This does not
				emit any events until traceGCSweepSpan is called.
				
				GCSweepStart must be paired with traceGCSweepDone and there
				must be no preemption points between these two calls.
				
				Must be called with a valid P.
			
				GoCreate emits a GoCreate event.
			
				GoCreateSyscall indicates that a goroutine has transitioned from dead to GoSyscall.
				
				Unlike GoCreate, the caller must be running on gp.
				
				This occurs when C code calls into Go. On pthread platforms it occurs only when
				a C thread calls into Go code for the first time.
			
				GoDestroySyscall indicates that a goroutine has transitioned from GoSyscall to dead.
				
				Must not have a P.
				
				This occurs when Go code returns back to C. On pthread platforms it occurs only when
				the C thread is destroyed.
			
				GoEnd emits a GoDestroy event.
				
				TODO(mknyszek): Rename this to GoDestroy.
			
				GoPark emits a GoBlock event with the provided reason.
				
				TODO(mknyszek): Replace traceBlockReason with waitReason. It's silly
				that we have both, and waitReason is way more descriptive.
			
				GoPreempt emits a GoStop event with a GoPreempted reason.
			
				GoSched emits a GoStop event with a GoSched reason.
			
				GoStart emits a GoStart event.
				
				Must be called with a valid P.
			
				GoStop emits a GoStop event with the provided reason.
			
				GoSwitch emits a GoSwitch event. If destroy is true, the calling goroutine
				is simultaneously being destroyed.
			
				GoSysCall emits a GoSyscallBegin event.
				
				Must be called with a valid P.
			
				GoSysExit emits a GoSyscallEnd event, possibly along with a GoSyscallBlocked event
				if lostP is true.
				
				lostP must be true in all cases that a goroutine loses its P during a syscall.
				This means it's not sufficient to check if it has no P. In particular, it needs to be
				true in the following cases:
				- The goroutine lost its P, it ran some other code, and then got it back. It's now running with that P.
				- The goroutine lost its P and was unable to reacquire it, and is now running without a P.
				- The goroutine lost its P and acquired a different one, and is now running with that P.
			
				GoUnpark emits a GoUnblock event.
			
				Gomaxprocs emits a ProcsChange event.
			
				GoroutineStackAlloc records that a goroutine stack was newly allocated at address base with the provided size.
			
				GoroutineStackExists records that a goroutine stack already exists at address base with the provided size.
			
				GoroutineStackFree records that a goroutine stack at address base is about to be freed.
			
				HeapAlloc emits a HeapAlloc event.
			
				HeapGoal reads the current heap goal and emits a HeapGoal event.
			
				HeapObjectAlloc records that an object was newly allocated at addr with the provided type.
				The type is optional, and the size of the slot occupied by the object is inferred from the
				span containing it.
			
				HeapObjectExists records that an object already exists at addr with the provided type.
				The type is optional, and the size of the slot occupied by the object is inferred from the
				span containing it.
			
				HeapObjectFree records that an object at addr is about to be freed.
			
				ProcStart traces a ProcStart event.
				
				Must be called with a valid P.
			
				ProcSteal indicates that our current M stole a P from another M.
				
				inSyscall indicates that we're stealing the P from a syscall context.
				
				The caller must have ownership of pp.
			
				ProcStop traces a ProcStop event.
			
				STWDone traces a STWEnd event.
			
				STWStart traces a STWBegin event.
			
				SpanAlloc records an event indicating that the span has just been allocated.
			
				SpanExists records an event indicating that the span exists.
			
				SpanFree records an event indicating that the span is about to be freed.
			
			
				nosplit because it's part of writing an event for an M, which must not
				have any stack growth.
			
				byte appends v to buf.
				
				nosplit because it's part of writing an event for an M, which must not
				have any stack growth.
			
				emitUnblockStatus emits a GoStatus GoWaiting event for a goroutine about to be
				unblocked to the trace writer.
			
				end writes the buffer back into the m.
				
				nosplit because it's part of writing an event for an M, which must not
				have any stack growth.
			
				ensure makes sure that at least maxSize bytes are available to write.
				
				Returns whether the buffer was flushed.
				
				nosplit because it's part of writing an event for an M, which must not
				have any stack growth.
			
				event writes out the bytes of an event into the event stream.
				
				nosplit because it's part of writing an event for an M, which must not
				have any stack growth.
			
				eventWriter creates a new traceEventWriter. It is the main entrypoint for writing trace events.
				
				Before creating the event writer, this method will emit a status for the current goroutine
				or proc if it exists, and if it hasn't had its status emitted yet. goStatus and procStatus indicate
				what the status of goroutine or P should be immediately *before* the events that are about to
				be written using the eventWriter (if they exist). No status will be written if there's no active
				goroutine or P.
				
				Callers can elect to pass a constant value here if the status is clear (e.g. a goroutine must have
				been Runnable before a GoStart). Otherwise, callers can query the status of either the goroutine
				or P and pass the appropriate status.
				
				In this case, the default status should be traceGoBad or traceProcBad to help identify bugs sooner.
			
				expWriter returns a traceWriter that writes into the current M's stream for
				the given experiment.
			
				flush puts w.traceBuf on the queue of full buffers.
				
				nosplit because it's part of writing an event for an M, which must not
				have any stack growth.
			
				ok returns true if the traceLocker is valid (i.e. tracing is enabled).
				
				nosplit because it's called on the syscall path when stack movement is forbidden.
			
				refill puts w.traceBuf on the queue of full buffers and refreshes w's buffer.
			
				rtype returns a traceArg representing typ which may be passed to write.
			
				stack takes a stack trace skipping the provided number of frames.
				It then returns a traceArg representing that stack which may be
				passed to write.
			
				startPC takes a start PC for a goroutine and produces a unique
				stack ID for it.
				
				It then returns a traceArg representing that stack which may be
				passed to write.
			
				string returns a traceArg representing s which may be passed to write.
				The string is assumed to be relatively short and popular, so it may be
				stored for a while in the string dictionary.
			
				stringData appends s's data directly to buf.
				
				nosplit because it's part of writing an event for an M, which must not
				have any stack growth.
			
				uniqueString returns a traceArg representing s which may be passed to write.
				The string is assumed to be unique or long, so it will be written out to
				the trace eagerly.
			
				varint appends v to buf in little-endian-base-128 encoding.
				
				nosplit because it's part of writing an event for an M, which must not
				have any stack growth.
			
				varintAt writes varint v at byte position pos in buf. This always
				consumes traceBytesPerNumber bytes. This is intended for when the caller
				needs to reserve space for a varint but can't populate it until later.
				Use varintReserve to reserve this space.
				
				nosplit because it's part of writing an event for an M, which must not
				have any stack growth.
			
				varintReserve reserves enough space in buf to hold any varint.
				
				Space reserved this way can be filled in with the varintAt method.
				
				nosplit because it's part of writing an event for an M, which must not
				have any stack growth.
			
				writeGoStatus emits a GoStatus event as well as any active ranges on the goroutine.
				
				nosplit because it's part of writing an event for an M, which must not
				have any stack growth.
			
				writeProcStatus emits a ProcStatus event with all the provided information.
				
				The caller must have taken ownership of a P's status writing, and the P must be
				prevented from transitioning.
				
				nosplit because it's part of writing an event for an M, which must not
				have any stack growth.
			
				writeProcStatusForP emits a ProcStatus event for the provided p based on its status.
				
				The caller must fully own pp and it must be prevented from transitioning (e.g. this can be
				called by a forEachP callback or from a STW).
				
				nosplit because it's part of writing an event for an M, which must not
				have any stack growth.
			
				writer returns a traceWriter that writes into the current M's stream.
				
				Once this is called, the caller must guard against stack growth until
				end is called on it. Therefore, it's highly recommended to use this
				API in a "fluent" style, for example tl.writer().event(...).end().
				Better yet, callers just looking to write events should use eventWriter
				when possible, which is a much safer wrapper around this function.
				
				nosplit to allow for safe reentrant tracing from stack growth paths.
		
			
			func dumpStacksRec(node *traceMapNode, w traceWriter, stackBuf []uintptr) traceWriter
			func dumpTypesRec(node *traceMapNode, w traceWriter) traceWriter
			func unsafeTraceExpWriter(gen uintptr, buf *traceBuf, exp traceExperiment) traceWriter
			func unsafeTraceWriter(gen uintptr, buf *traceBuf) traceWriter
		
			
			func dumpStacksRec(node *traceMapNode, w traceWriter, stackBuf []uintptr) traceWriter
			func dumpTypesRec(node *traceMapNode, w traceWriter) traceWriter
	
		typePointers is an iterator over the pointers in a heap object.
		
		Iteration through this type implements the tiling algorithm described at the
		top of this file.
		
			
			
				addr is the address the iterator is currently working from and describes
				the address of the first word referenced by mask.
			
				elem is the address of the current array element of type typ being iterated over.
				Objects that are not arrays are treated as single-element arrays, in which case
				this value does not change.
			
				mask is a bitmask where each bit corresponds to pointer-words after addr.
				Bit 0 is the pointer-word at addr, Bit 1 is the next word, and so on.
				If a bit is 1, then there is a pointer at that word.
				nextFast and next mask out bits in this mask as their pointers are processed.
			
				typ is a pointer to the type information for the heap object's type.
				This may be nil if the object is in a span where heapBitsInSpan(span.elemsize) is true.
		
			
			
				fastForward moves the iterator forward by n bytes. n must be a multiple
				of goarch.PtrSize. limit must be the same limit passed to next for this
				iterator.
				
				nosplit because it is used during write barriers and must not be preempted.
			
				next advances the pointers iterator, returning the updated iterator and
				the address of the next pointer.
				
				limit must be the same each time it is passed to next.
				
				nosplit because it is used during write barriers and must not be preempted.
			
				nextFast is the fast path of next. nextFast is written to be inlineable and,
				as the name implies, fast.
				
				Callers that are performance-critical should iterate using the following
				pattern:
				
					for {
						var addr uintptr
						if tp, addr = tp.nextFast(); addr == 0 {
							if tp, addr = tp.next(limit); addr == 0 {
								break
							}
						}
						// Use addr.
						...
					}
				
				nosplit because it is used during write barriers and must not be preempted.
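				
				As a stand-alone illustration of what the mask encodes (plain Go, not the runtime's
				iterator): bit i set means there is a pointer at addr + i*PtrSize, and iteration
				peels set bits off the bottom of the mask.
				
					package main
					
					import (
						"fmt"
						"math/bits"
					)
					
					const ptrSize = 8
					
					// pointersIn lists the addresses selected by mask starting at addr:
					// bit 0 is the word at addr, bit 1 the next word, and so on.
					func pointersIn(addr uintptr, mask uint64) []uintptr {
						var out []uintptr
						for mask != 0 {
							i := bits.TrailingZeros64(mask)
							out = append(out, addr+uintptr(i)*ptrSize)
							mask &^= 1 << i
						}
						return out
					}
					
					func main() {
						fmt.Println(pointersIn(0x1000, 0b1011)) // [4096 4104 4120]
					}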
		
			
			func dumpTypePointers(tp typePointers)
	
		
			
			__fpregs_mem fpstate
			uc_flags uint64
			uc_link *ucontext
			uc_mcontext mcontext
			uc_sigmask usigset
			uc_stack stackt
	
		The specialized convTx routines need a type descriptor to use when calling mallocgc.
		We don't need the type to be exact, just to have the correct size, alignment, and pointer-ness.
		However, when debugging, it'd be nice to have some indication in mallocgc where the types came from,
		so we use named types here.
		We then construct interface values of these types,
		and then extract the type word to use as needed.
	
	type uncommontype = abi.UncommonType (struct)
		An unwinder iterates the physical stack frames of a Go stack.
		
		Typical use of an unwinder looks like:
		
			var u unwinder
			for u.init(gp, 0); u.valid(); u.next() {
				// ... use frame info in u ...
			}
		
		Implementation note: This is carefully structured to be pointer-free because
		tracebacks happen in places that disallow write barriers (e.g., signals).
		Even if this is stack-allocated, its pointer-receiver methods don't know that
		their receiver is on the stack, so they still emit write barriers. Here we
		address that by carefully avoiding any pointers in this type. Another
		approach would be to split this into a mutable part that's passed by pointer
		but contains no pointers itself and an immutable part that's passed and
		returned by value and can contain pointers. We could potentially hide that
		we're doing that in trivial methods that are inlined into the caller that has
		the stack allocation, but that's fragile.
		
			
			
				calleeFuncID is the function ID of the caller of the current
				frame.
			
				cgoCtxt is the index into g.cgoCtxt of the next frame on the cgo stack.
				The cgo stack is unwound in tandem with the Go stack as we find marker frames.
			
				flags are the flags to this unwind. Some of these are updated as we
				unwind (see the flags documentation).
			
				frame is the current physical stack frame, or all 0s if
				there is no frame.
			
				g is the G whose stack is being unwound. If the
				unwindJumpStack flag is set and the unwinder jumps stacks,
				this will be different from the initial G.
		
			
			
				cgoCallers populates pcBuf with the cgo callers of the current frame using
				the registered cgo unwinder. It returns the number of PCs written to pcBuf.
				If the current frame is not a cgo frame or if there's no registered cgo
				unwinder, it returns 0.
			
				finishInternal is an unwinder-internal helper called after the stack has been
				exhausted. It sets the unwinder to an invalid state and checks that it
				successfully unwound the entire stack.
			
				init initializes u to start unwinding gp's stack and positions the
				iterator on gp's innermost frame. gp must not be the current G.
				
				A single unwinder can be reused for multiple unwinds.
			(*unwinder) initAt(pc0, sp0, lr0 uintptr, gp *g, flags unwindFlags)
			(*unwinder) next()
			
				resolveInternal fills in u.frame based on u.frame.fn, pc, and sp.
				
				innermost indicates that this is the first resolve on this stack. If
				innermost is set, isSyscall indicates that the PC/SP was retrieved from
				gp.syscall*; this is otherwise ignored.
				
				On entry, u.frame contains:
				  - fn is the running function.
				  - pc is the PC in the running function.
				  - sp is the stack pointer at that program counter.
				  - For the innermost frame on LR machines, lr is the program counter that called fn.
				
				On return, u.frame contains:
				  - fp is the stack pointer of the caller.
				  - lr is the program counter that called fn.
				  - varp, argp, and continpc are populated for the current frame.
				
				If fn is a stack-jumping function, resolveInternal can change the entire
				frame state to follow that stack jump.
				
				This is internal to unwinder.
			
				symPC returns the PC that should be used for symbolizing the current frame.
				Specifically, this is the PC of the last instruction executed in this frame.
				
				If this frame did a normal call, then frame.pc is a return PC, so this will
				return frame.pc-1, which points into the CALL instruction. If the frame was
				interrupted by a signal (e.g., profiler, segv, etc) then frame.pc is for the
				trapped instruction, so this returns frame.pc. See issue #34123. Finally,
				frame.pc can be at function entry when the frame is initialized without
				actually running code, like in runtime.mstart, in which case this returns
				frame.pc because that's the best we can do.
			(*unwinder) valid() bool
		
			
			func traceback2(u *unwinder, showRuntime bool, skip, max int) (n, lastN int)
			func tracebackPCs(u *unwinder, skip int, pcBuf []uintptr) int
	
		unwindFlags control the behavior of various unwinders.
		
			
			func traceback1(pc, sp, lr uintptr, gp *g, flags unwindFlags)
		
			
			const unwindJumpStack
			const unwindPrintErrors
			const unwindSilentErrors
			const unwindTrap
	
		
			
			
				active is the user arena chunk we're currently allocating into.
			
				defunct is true if free has been called on this arena.
				
				This is just a best-effort way to discover a concurrent allocation
				and free. Also used to detect a double-free.
			
				fullList is a list of full chunks that do not have enough free memory left, and
				that we'll free once this user arena is freed.
				
				Can't use mSpanList here because it's not-in-heap.
			
				refs is a set of references to the arena chunks so that they're kept alive.
				
				The last reference in the list always refers to active, while the rest of
				them correspond to fullList. Specifically, the head of fullList is the
				second-to-last one, fullList.next is the third-to-last, and so on.
				
				In other words, every time a new chunk becomes active, it's appended to this
				list.
		
			
			
				alloc reserves space in the current chunk or calls refill and reserves space
				in a new chunk. If cap is negative, the type will be taken literally, otherwise
				it will be considered as an element type for a slice backing store with capacity
				cap.
			
				free returns the userArena's chunks back to mheap and marks it as defunct.
				
				Must be called at most once for any given arena.
				
				This operation is not safe to call concurrently with other operations on the
				same arena.
			
				new allocates a new object of the provided type into the arena, and returns
				its pointer.
				
				This operation is not safe to call concurrently with other operations on the
				same arena.
			
				refill inserts the current arena chunk onto the full list and obtains a new
				one, either from the partial list or allocating a new one, both from mheap.
			
				slice allocates a new slice backing store. slice must be a pointer to a slice
				(i.e. *[]T), because userArenaSlice will update the slice directly.
				
				cap determines the capacity of the slice backing store and must be non-negative.
				
				This operation is not safe to call concurrently with other operations on the
				same arena.
		
			
			func newUserArena() *userArena
	
		
			
			bucket []uint32
			chain []uint32
			isGNUHash bool
			
				Load information
			
				// loadAddr - recorded vaddr
			symOff uint32
			symstrings *[1125899906842623]byte
			
				Symbol table
			valid bool
			verdef *elfVerdef
			
				Version table
		
			
			func vdsoFindVersion(info *vdsoInfo, ver *vdsoVersionKey) int32
			func vdsoInitFromSysinfoEhdr(info *vdsoInfo, hdr *elfEhdr)
			func vdsoParseSymbols(info *vdsoInfo, version int32)
	
		
			
			verHash uint32
			version string
		
			
			func vdsoFindVersion(info *vdsoInfo, ver *vdsoVersionKey) int32
		
			
			  var vdsoLinuxVersion
	
		
			
			first *sudog
			last *sudog
		
			
			(*waitq) dequeue() *sudog
			(*waitq) dequeueSudoG(sgp *sudog)
			(*waitq) enqueue(sgp *sudog)
	
		A waitReason explains why a goroutine has been stopped.
		See gopark. Do not re-use waitReasons, add new ones.
		
			( waitReason) String() string
			
			( waitReason) isIdleInSynctest() bool
			( waitReason) isMutexWait() bool
			( waitReason) isWaitingForSuspendG() bool
		
			 waitReason : expvar.Var
			 waitReason : fmt.Stringer
			
			 waitReason : stringer
			 waitReason : context.stringer
		
			
			func casGToWaiting(gp *g, old uint32, reason waitReason)
			func casGToWaitingForSuspendG(gp *g, old uint32, reason waitReason)
			func forEachP(reason waitReason, fn func(*p))
			func gopark(unlockf func(*g, unsafe.Pointer) bool, lock unsafe.Pointer, reason waitReason, traceReason traceBlockReason, traceskip int)
			func goparkunlock(lock *mutex, reason waitReason, traceReason traceBlockReason, traceskip int)
			func goStatusToTraceGoStatus(status uint32, wr waitReason) traceGoStatus
			func newproc1(fn *funcval, callergp *g, callerpc uintptr, parked bool, waitreason waitReason) *g
			func semacquire1(addr *uint32, lifo bool, profile semaProfileFlags, skipframes int, reason waitReason)
		
			
			const waitReasonChanReceive
			const waitReasonChanReceiveNilChan
			const waitReasonChanSend
			const waitReasonChanSendNilChan
			const waitReasonCoroutine
			const waitReasonDebugCall
			const waitReasonDumpingHeap
			const waitReasonFinalizerWait
			const waitReasonFlushProcCaches
			const waitReasonForceGCIdle
			const waitReasonGarbageCollection
			const waitReasonGarbageCollectionScan
			const waitReasonGCAssistMarking
			const waitReasonGCAssistWait
			const waitReasonGCMarkTermination
			const waitReasonGCScavengeWait
			const waitReasonGCSweepWait
			const waitReasonGCWeakToStrongWait
			const waitReasonGCWorkerActive
			const waitReasonGCWorkerIdle
			const waitReasonIOWait
			const waitReasonPageTraceFlush
			const waitReasonPanicWait
			const waitReasonPreempted
			const waitReasonSelect
			const waitReasonSelectNoCases
			const waitReasonSemacquire
			const waitReasonSleep
			const waitReasonStoppingTheWorld
			const waitReasonSyncCondWait
			const waitReasonSyncMutexLock
			const waitReasonSyncRWMutexLock
			const waitReasonSyncRWMutexRLock
			const waitReasonSynctestChanReceive
			const waitReasonSynctestChanSend
			const waitReasonSynctestRun
			const waitReasonSynctestSelect
			const waitReasonSynctestWait
			const waitReasonSyncWaitGroupWait
			const waitReasonTraceGoroutineStatus
			const waitReasonTraceProcStatus
			const waitReasonTraceReaderBlocked
			const waitReasonWaitForGCCycle
			const waitReasonZero
	
		wakeableSleep manages a wakeable goroutine sleep.
		
		Users of this type must call init before first use and
		close to free up resources. Once close is called, init
		must be called before another use.
		
			
			
				lock protects access to wakeup, but not send/recv on it.
			timer *timer
			wakeup chan struct{}
		
			
			
				close wakes any goroutine sleeping on the timer and prevents
				further sleeping on it.
				
				Once close is called, the wakeableSleep must no longer be used.
				
				It must only be called once no goroutine is sleeping on the
				timer *and* nothing else will call wake concurrently.
			
				sleep sleeps for the provided duration in nanoseconds or until
				another goroutine calls wake.
				
				Must not be called by more than one goroutine at a time and
				must not be called concurrently with close.
			
				wake awakens any goroutine sleeping on the timer.
				
				Safe for concurrent use with all other methods.
		
			
			func newWakeableSleep() *wakeableSleep
	
		wbBuf is a per-P buffer of pointers queued by the write barrier.
		This buffer is flushed to the GC workbufs when it fills up and on
		various GC transitions.
		
		This is closely related to a "sequential store buffer" (SSB),
		except that SSBs are usually used for maintaining remembered sets,
		while this is used for marking.
		
			
			
				buf stores a series of pointers to execute write barriers on.
			
				end points to just past the end of buf. It must not be a
				pointer type because it points past the end of buf and must
				be updated without write barriers.
			
				next points to the next slot in buf. It must not be a
				pointer type because it can point past the end of buf and
				must be updated without write barriers.
				
				This is a pointer rather than an index to optimize the
				write barrier assembly.
		
			
			
				discard resets b's next pointer, but not its end pointer.
				
				This must be nosplit because it's called by wbBufFlush.
			
				empty reports whether b contains no pointers.
			
				getX returns space in the write barrier buffer to store X pointers.
				getX will flush the buffer if necessary. Callers should use this as:
				
					buf := &getg().m.p.ptr().wbBuf
					p := buf.get2()
					p[0], p[1] = old, new
					... actual memory write ...
				
				The caller must ensure there are no preemption points during the
				above sequence. There must be no preemption points while buf is in
				use because it is a per-P resource. There must be no preemption
				points between the buffer put and the write to memory because this
				could allow a GC phase change, which could result in missed write
				barriers.
				
				getX must be nowritebarrierrec because write barriers here would
				corrupt the write barrier buffer. It (and everything it calls, if
				it called anything) has to be nosplit to avoid scheduling onto a
				different P and a different buffer.
			(*wbBuf) get2() *[2]uintptr
			
				reset empties b by resetting its next and end pointers.
	
		winlibcall is not implemented on non-Windows systems,
		but it is used in non-OS-specific parts of the runtime.
		Define it as an empty struct to avoid wasting stack space.
	
		
			
			
				account for the above fields
			workbufhdr workbufhdr
			workbufhdr.nobj int
			
				// must be first
		
			
			(*workbuf) checkempty()
			(*workbuf) checknonempty()
		
			
			func getempty() *workbuf
			func handoff(b *workbuf) *workbuf
			func trygetfull() *workbuf
		
			
			func handoff(b *workbuf) *workbuf
			func putempty(b *workbuf)
			func putfull(b *workbuf)
	
		
			
				// GC assists
			
				// GC dedicated mark workers + pauses
			
				// GC idle mark workers
			
				// GC pauses (all GOMAXPROCS, even if just 1 is running)
			cpuStats.GCTotalTime int64
			
				// Time Ps spent in _Pidle.
			
				// background scavenger
			
				// scavenge assists
			cpuStats.ScavengeTotalTime int64
			
				// GOMAXPROCS * (monotonic wall clock time elapsed)
			
				// Time Ps spent in _Prunning or _Psyscall that's not any of the above.
			
			
				assistQueue is a queue of assists that are blocked because
				there was neither enough credit to steal nor enough work to
				do.
			
				Base indexes of each root type. Set by gcMarkRootPrepare.
			
				Base indexes of each root type. Set by gcMarkRootPrepare.
			
				Base indexes of each root type. Set by gcMarkRootPrepare.
			
				Base indexes of each root type. Set by gcMarkRootPrepare.
			
				Base indexes of each root type. Set by gcMarkRootPrepare.
			
				// cas to 1 when at a background mark completion point
			
				bytesMarked is the number of bytes marked this cycle. This
				includes bytes blackened in scanned objects, noscan objects
				that go straight to black, objects allocated as black during
				the cycle, and permagrey objects scanned by markroot during
				the concurrent scan phase.
				
				This is updated atomically during the cycle. Updates may be batched
				arbitrarily, since the value is only read at the end of the cycle.
				
				Because of benign races during marking, this number may not
				be the exact number of marked bytes, but it should be very
				close.
				
				Put this field here because it needs 64-bit atomic access
				(and thus 8-byte alignment even on 32-bit architectures).
			
				Cumulative estimated CPU usage.
			
				cycles is the number of completed GC cycles, where a GC
				cycle is sweep termination, mark, mark termination, and
				sweep. This differs from memstats.numgc, which is
				incremented at mark termination.
			
				// lock-free list of empty blocks workbuf
			
				// lock-free list of full blocks workbuf
			
				debug.gctrace heap sizes for this cycle.
			
				debug.gctrace heap sizes for this cycle.
			
				debug.gctrace heap sizes for this cycle.
			
				initialHeapLive is the value of gcController.heapLive at the
				beginning of this GC cycle.
			
				markDoneSema protects transitions from mark to mark termination.
			
				// number of markroot jobs
			
				// next markroot job
			
				Timing/utilization stats for this cycle.
			
				mode is the concurrency mode of the current GC cycle.
			
				Number of roots of various root types. Set by gcMarkRootPrepare.
				
				nStackRoots == len(stackRoots), but we have nStackRoots for
				consistency.
			
				Number of roots of various root types. Set by gcMarkRootPrepare.
				
				nStackRoots == len(stackRoots), but we have nStackRoots for
				consistency.
			
				Number of roots of various root types. Set by gcMarkRootPrepare.
				
				nStackRoots == len(stackRoots), but we have nStackRoots for
				consistency.
			
				Number of roots of various root types. Set by gcMarkRootPrepare.
				
				nStackRoots == len(stackRoots), but we have nStackRoots for
				consistency.
			nproc uint32
			nwait uint32
			
				pauseNS is the total STW time this cycle, measured as the time between
				when stopping began (just before trying to stop Ps) and just after the
				world started again.
			
				stackRoots is a snapshot of all of the Gs that existed
				before the beginning of concurrent marking. The backing
				store of this must not be modified because it might be
				shared with allgs.
			
				Each type of GC state transition is protected by a lock.
				Since multiple threads can simultaneously detect the state
				transition condition, any thread that detects a transition
				condition must acquire the appropriate transition lock,
				re-check the transition condition and return if it no
				longer holds or perform the transition if it does.
				Likewise, any transition must invalidate the transition
				condition before releasing the lock. This ensures that each
				transition is performed by exactly one thread and threads
				that need the transition to happen block until it has
				happened.
				
				startSema protects the transition from "off" to mark or
				mark termination.
			
				strongFromWeak controls how the GC interacts with weak->strong
				pointer conversions.
			
				Timing/utilization stats for this cycle.
			
				sweepWaiters is a list of blocked goroutines to wake when
				we transition from mark termination to sweep.
			
				// nanotime() of phase start
			
				// nanotime() of phase start
			
				// nanotime() of phase start
			
				// nanotime() of phase start
			tstart int64
			
				userForced indicates the current GC cycle was forced by an
				explicit user call.
			wbufSpans struct{lock mutex; free mSpanList; busy mSpanList}
		
			
			
				accumulate takes a cpuStats and adds in the current state of all GC CPU
				counters.
				
				gcMarkPhase indicates that we're in the mark phase and that certain counter
				values should be used.
			
				accumulateGCPauseTime adds dt*stwProcs to the GC CPU pause time stats. dt should be
				the actual time spent paused, for orthogonality. maxProcs should be GOMAXPROCS,
				not work.stwprocs, since this number must be comparable to a total time computed
				from GOMAXPROCS.
		
			
			  var work
	
		worldStop provides context from the stop-the-world required by the
		start-the-world.
		
			
			finishedStopping int64
			reason stwReason
			startedStopping int64
			stoppingCPUTime int64
		
			
			func stopTheWorld(reason stwReason) worldStop
			func stopTheWorldGC(reason stwReason) worldStop
			func stopTheWorldWithSema(reason stwReason) worldStop
		
			
			func gcMarkTermination(stw worldStop)
			func startTheWorld(w worldStop)
			func startTheWorldGC(w worldStop)
			func startTheWorldWithSema(now int64, w worldStop) int64
		
			
			  var stopTheWorldContext
	
		
			
			
				// number of low-order bits to not overwrite
			
				// some pointer bits starting at the address addr.
			
				// offset in span that the low bit of mask represents the pointer state of.
			
				// number of bits in buf that are valid (including low)
		
			
			
				Flush the bits that have been written, and add zeros as needed
				to cover the full object [addr, addr+size).
			
				Add padding of size bytes.
			
				write appends the pointerness of the next valid pointer slots
				using the low valid bits of bits. 1=pointer, 0=scalar.
Package-Level Functions (total 1683, in which 34 are exported)
	
		Type Parameters:
			T: any
			S: any
		AddCleanup attaches a cleanup function to ptr. Some time after ptr is no longer
		reachable, the runtime will call cleanup(arg) in a separate goroutine.
		
		A typical use is that ptr is an object wrapping an underlying resource (e.g.,
		a File object wrapping an OS file descriptor), arg is the underlying resource
		(e.g., the OS file descriptor), and the cleanup function releases the underlying
		resource (e.g., by calling the close system call).
		
		There are few constraints on ptr. In particular, multiple cleanups may be
		attached to the same pointer, or to different pointers within the same
		allocation.
		
		If ptr is reachable from cleanup or arg, ptr will never be collected
		and the cleanup will never run. As a protection against simple cases of this,
		AddCleanup panics if arg is equal to ptr.
		
		There is no specified order in which cleanups will run.
		In particular, if several objects point to each other and all become
		unreachable at the same time, their cleanups all become eligible to run
		and can run in any order. This is true even if the objects form a cycle.
		
		Cleanups run concurrently with any user-created goroutines.
		Cleanups may also run concurrently with one another (unlike finalizers).
		If a cleanup function must run for a long time, it should create a new goroutine
		to avoid blocking the execution of other cleanups.
		
		If ptr has both a cleanup and a finalizer, the cleanup will only run once
		it has been finalized and becomes unreachable without an associated finalizer.
		
		The cleanup(arg) call is not always guaranteed to run; in particular it is not
		guaranteed to run before program exit.
		
		Cleanups are not guaranteed to run if the size of T is zero bytes, because
		it may share the same address with other zero-size objects in memory. See
		https://go.dev/ref/spec#Size_and_alignment_guarantees.
		
		It is not guaranteed that a cleanup will run for objects allocated
		in initializers for package-level variables. Such objects may be
		linker-allocated, not heap-allocated.
		
		Note that because cleanups may execute arbitrarily far into the future
		after an object is no longer referenced, the runtime is allowed to perform
		a space-saving optimization that batches objects together in a single
		allocation slot. The cleanup for an unreferenced object in such an
		allocation may never run if it always exists in the same batch as a
		referenced object. Typically, this batching only happens for tiny
		(on the order of 16 bytes or less) and pointer-free objects.
		
		A cleanup may run as soon as an object becomes unreachable.
		In order to use cleanups correctly, the program must ensure that
		the object is reachable until it is safe to run its cleanup.
		Objects stored in global variables, or that can be found by tracing
		pointers from a global variable, are reachable. A function argument or
		receiver may become unreachable at the last point where the function
		mentions it. To ensure a cleanup does not get called prematurely,
		pass the object to the [KeepAlive] function after the last point
		where the object must remain reachable.
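		
		A minimal sketch of the wrapper pattern described above, assuming the
		runtime and syscall packages are imported; fileWrapper and open are
		hypothetical names used only for illustration:
		
			// fileWrapper wraps an OS file descriptor.
			type fileWrapper struct {
				fd int
			}
			
			// open returns a wrapper whose descriptor is closed some time
			// after the wrapper becomes unreachable.
			func open(path string) (*fileWrapper, error) {
				fd, err := syscall.Open(path, syscall.O_RDONLY, 0)
				if err != nil {
					return nil, err
				}
				w := &fileWrapper{fd: fd}
				// arg is the descriptor itself, not w, so the cleanup does
				// not keep w reachable (and arg != ptr, as required).
				runtime.AddCleanup(w, func(fd int) { syscall.Close(fd) }, fd)
				return w, nil
			}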
	
		BlockProfile returns n, the number of records in the current blocking profile.
		If len(p) >= n, BlockProfile copies the profile into p and returns n, true.
		If len(p) < n, BlockProfile does not change p and returns n, false.
		
		Most clients should use the [runtime/pprof] package or
		the [testing] package's -test.blockprofile flag instead
		of calling BlockProfile directly.
	
		Breakpoint executes a breakpoint trap.
	
		Caller reports file and line number information about function invocations on
		the calling goroutine's stack. The argument skip is the number of stack frames
		to ascend, with 0 identifying the caller of Caller. (For historical reasons the
		meaning of skip differs between Caller and [Callers].) The return values report
		the program counter, the file name (using forward slashes as path separator, even
		on Windows), and the line number within the file of the corresponding call.
		The boolean ok is false if it was not possible to recover the information.
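		
		For example, a logging helper might report where it was called from
		(a sketch; logCaller is a hypothetical name and fmt is assumed to be
		imported):
		
			// logCaller prints the file and line of the code that called it.
			// skip = 1 skips logCaller's own frame and identifies its caller.
			func logCaller() {
				_, file, line, ok := runtime.Caller(1)
				if !ok {
					fmt.Println("caller information unavailable")
					return
				}
				fmt.Printf("called from %s:%d\n", file, line)
			}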
	
		Callers fills the slice pc with the return program counters of function invocations
		on the calling goroutine's stack. The argument skip is the number of stack frames
		to skip before recording in pc, with 0 identifying the frame for Callers itself and
		1 identifying the caller of Callers.
		It returns the number of entries written to pc.
		
		To translate these PCs into symbolic information such as function
		names and line numbers, use [CallersFrames]. CallersFrames accounts
		for inlined functions and adjusts the return program counters into
		call program counters. Iterating over the returned slice of PCs
		directly is discouraged, as is using [FuncForPC] on any of the
		returned PCs, since these cannot account for inlining or return
		program counter adjustment.
	
		CallersFrames takes a slice of PC values returned by [Callers] and
		prepares to return function/file/line information.
		Do not change the slice until you are done with the [Frames].
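		
		A sketch of the Callers/CallersFrames pairing recommended above
		(printStack is a hypothetical name; the buffer size of 32 is arbitrary):
		
			// printStack walks the calling goroutine's stack and prints each
			// frame, accounting for inlined functions via Frames.
			func printStack() {
				pc := make([]uintptr, 32)
				n := runtime.Callers(2, pc) // skip Callers and printStack itself
				frames := runtime.CallersFrames(pc[:n])
				for {
					frame, more := frames.Next()
					fmt.Printf("%s\n\t%s:%d\n", frame.Function, frame.File, frame.Line)
					if !more {
						break
					}
				}
			}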
	
		CPUProfile panics.
		It formerly provided raw access to chunks of
		a pprof-format profile generated by the runtime.
		The details of generating that format have changed,
		so this functionality has been removed.
		
		Deprecated: Use the [runtime/pprof] package,
		or the handlers in the [net/http/pprof] package,
		or the [testing] package's -test.cpuprofile flag instead.
	
		FuncForPC returns a *[Func] describing the function that contains the
		given program counter address, or else nil.
		
		If pc represents multiple functions because of inlining, it returns
		the *Func describing the innermost function, but with an entry of
		the outermost function.
	
		GC runs a garbage collection and blocks the caller until the
		garbage collection is complete. It may also block the entire
		program.
	
		Goexit terminates the goroutine that calls it. No other goroutine is affected.
		Goexit runs all deferred calls before terminating the goroutine. Because Goexit
		is not a panic, any recover calls in those deferred functions will return nil.
		
		Calling Goexit from the main goroutine terminates that goroutine
		without func main returning. Since func main has not returned,
		the program continues execution of other goroutines.
		If all other goroutines exit, the program crashes.
		
		It crashes if called from a thread not created by the Go runtime.
	
		GOMAXPROCS sets the maximum number of CPUs that can be executing
		simultaneously and returns the previous setting. It defaults to
		the value of [runtime.NumCPU]. If n < 1, it does not change the current setting.
		This call will go away when the scheduler improves.
	
		GOROOT returns the root of the Go tree. It uses the
		GOROOT environment variable, if set at process start,
		or else the root used during the Go build.
		
		Deprecated: The root used during the Go build will not be
		meaningful if the binary is copied to another machine.
		Use the system path to locate the “go” binary, and use
		“go env GOROOT” to find its GOROOT.
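		
		A sketch of the recommended replacement, running the go binary found
		on the system path (assumes the os/exec and strings packages; goroot
		is a hypothetical helper name):
		
			// goroot asks the go command for its GOROOT instead of relying
			// on the value recorded at build time.
			func goroot() (string, error) {
				out, err := exec.Command("go", "env", "GOROOT").Output()
				if err != nil {
					return "", err
				}
				return strings.TrimSpace(string(out)), nil
			}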
	
		GoroutineProfile returns n, the number of records in the active goroutine stack profile.
		If len(p) >= n, GoroutineProfile copies the profile into p and returns n, true.
		If len(p) < n, GoroutineProfile does not change p and returns n, false.
		
		Most clients should use the [runtime/pprof] package instead
		of calling GoroutineProfile directly.
	
		Gosched yields the processor, allowing other goroutines to run. It does not
		suspend the current goroutine, so execution resumes automatically.
	
		KeepAlive marks its argument as currently reachable.
		This ensures that the object is not freed, and its finalizer is not run,
		before the point in the program where KeepAlive is called.
		
		A very simplified example showing where KeepAlive is required:
		
			type File struct { d int }
			d, err := syscall.Open("/file/path", syscall.O_RDONLY, 0)
			// ... do something if err != nil ...
			p := &File{d}
			runtime.SetFinalizer(p, func(p *File) { syscall.Close(p.d) })
			var buf [10]byte
			n, err := syscall.Read(p.d, buf[:])
			// Ensure p is not finalized until Read returns.
			runtime.KeepAlive(p)
			// No more uses of p after this point.
		
		Without the KeepAlive call, the finalizer could run at the start of
		[syscall.Read], closing the file descriptor before syscall.Read makes
		the actual system call.
		
		Note: KeepAlive should only be used to prevent finalizers from
		running prematurely. In particular, when used with [unsafe.Pointer],
		the rules for valid uses of unsafe.Pointer still apply.
	
		LockOSThread wires the calling goroutine to its current operating system thread.
		The calling goroutine will always execute in that thread,
		and no other goroutine will execute in it,
		until the calling goroutine has made as many calls to
		[UnlockOSThread] as to LockOSThread.
		If the calling goroutine exits without unlocking the thread,
		the thread will be terminated.
		
		All init functions are run on the startup thread. Calling LockOSThread
		from an init function will cause the main function to be invoked on
		that thread.
		
		A goroutine should call LockOSThread before calling OS services or
		non-Go library functions that depend on per-thread state.
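		
		For example, a goroutine that services a library with per-thread state
		can pin itself for its lifetime (a sketch; worker is a hypothetical name):
		
			// worker runs every job on the same OS thread, as required by
			// APIs that keep per-thread state.
			func worker(jobs <-chan func()) {
				runtime.LockOSThread()
				defer runtime.UnlockOSThread()
				for job := range jobs {
					job()
				}
			}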
	
		MemProfile returns a profile of memory allocated and freed per allocation
		site.
		
		MemProfile returns n, the number of records in the current memory profile.
		If len(p) >= n, MemProfile copies the profile into p and returns n, true.
		If len(p) < n, MemProfile does not change p and returns n, false.
		
		If inuseZero is true, the profile includes allocation records
		where r.AllocBytes > 0 but r.AllocBytes == r.FreeBytes.
		These are sites where memory was allocated, but it has all
		been released back to the runtime.
		
		The returned profile may be up to two garbage collection cycles old.
		This is to avoid skewing the profile toward allocations; because
		allocations happen in real time but frees are delayed until the garbage
		collector performs sweeping, the profile only accounts for allocations
		that have had a chance to be freed by the garbage collector.
		
		Most clients should use the runtime/pprof package or
		the testing package's -test.memprofile flag instead
		of calling MemProfile directly.
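		
		Callers that do use MemProfile directly typically grow the record
		slice until everything fits, per the n/ok contract above (a sketch;
		readMemProfile is a hypothetical name):
		
			// readMemProfile retries with a larger slice until MemProfile
			// reports that all records were copied.
			func readMemProfile(inuseZero bool) []runtime.MemProfileRecord {
				n, _ := runtime.MemProfile(nil, inuseZero)
				for {
					// Allocate headroom in case records appear between calls.
					p := make([]runtime.MemProfileRecord, n+50)
					var ok bool
					n, ok = runtime.MemProfile(p, inuseZero)
					if ok {
						return p[:n]
					}
				}
			}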
	
		MutexProfile returns n, the number of records in the current mutex profile.
		If len(p) >= n, MutexProfile copies the profile into p and returns n, true.
		Otherwise, MutexProfile does not change p, and returns n, false.
		
		Most clients should use the [runtime/pprof] package
		instead of calling MutexProfile directly.
	
		NumCgoCall returns the number of cgo calls made by the current process.
	
		NumCPU returns the number of logical CPUs usable by the current process.
		
		The set of available CPUs is checked by querying the operating system
		at process startup. Changes to operating system CPU allocation after
		process startup are not reflected.
	
		NumGoroutine returns the number of goroutines that currently exist.
	
		ReadMemStats populates m with memory allocator statistics.
		
		The returned memory allocator statistics are up to date as of the
		call to ReadMemStats. This is in contrast with a heap profile,
		which is a snapshot as of the most recently completed garbage
		collection cycle.
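		
		For example (a sketch; assumes fmt is imported):
		
			// printHeapUsage reports a few allocator statistics.
			func printHeapUsage() {
				var m runtime.MemStats
				runtime.ReadMemStats(&m)
				fmt.Printf("heap in use: %d bytes, completed GC cycles: %d\n",
					m.HeapAlloc, m.NumGC)
			}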
	
		ReadTrace returns the next chunk of binary tracing data, blocking until data
		is available. If tracing is turned off and all the data accumulated while it
		was on has been returned, ReadTrace returns nil. The caller must copy the
		returned data before calling ReadTrace again.
		ReadTrace must be called from one goroutine at a time.
	
		SetBlockProfileRate controls the fraction of goroutine blocking events
		that are reported in the blocking profile. The profiler aims to sample
		an average of one blocking event per rate nanoseconds spent blocked.
		
		To include every blocking event in the profile, pass rate = 1.
		To turn off profiling entirely, pass rate <= 0.
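		
		A sketch of sampling every blocking event around a function and writing
		the profile through [runtime/pprof] (assumes the os and runtime/pprof
		packages; profileBlocking and the output path are illustrative):
		
			// profileBlocking records blocking events while fn runs.
			func profileBlocking(fn func()) error {
				runtime.SetBlockProfileRate(1) // sample every blocking event
				defer runtime.SetBlockProfileRate(0)
				fn()
				f, err := os.Create("block.pprof")
				if err != nil {
					return err
				}
				defer f.Close()
				return pprof.Lookup("block").WriteTo(f, 0)
			}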
	
		SetCgoTraceback records three C functions to use to gather
		traceback information from C code and to convert that traceback
		information into symbolic information. These are used when printing
		stack traces for a program that uses cgo.
		
		The traceback and context functions may be called from a signal
		handler, and must therefore use only async-signal safe functions.
		The symbolizer function may be called while the program is
		crashing, and so must be cautious about using memory.  None of the
		functions may call back into Go.
		
		The context function will be called with a single argument, a
		pointer to a struct:
		
			struct {
				Context uintptr
			}
		
		In C syntax, this struct will be
		
			struct {
				uintptr_t Context;
			};
		
		If the Context field is 0, the context function is being called to
		record the current traceback context. It should record in the
		Context field whatever information is needed about the current
		point of execution to later produce a stack trace, probably the
		stack pointer and PC. In this case the context function will be
		called from C code.
		
		If the Context field is not 0, then it is a value returned by a
		previous call to the context function. This case is called when the
		context is no longer needed; that is, when the Go code is returning
		to its C code caller. This permits the context function to release
		any associated resources.
		
		While it would be correct for the context function to record a
		complete stack trace whenever it is called, and simply copy that
		out in the traceback function, in a typical program the context
		function will be called many times without ever recording a
		traceback for that context. Recording a complete stack trace in a
		call to the context function is likely to be inefficient.
		
		The traceback function will be called with a single argument, a
		pointer to a struct:
		
			struct {
				Context    uintptr
				SigContext uintptr
				Buf        *uintptr
				Max        uintptr
			}
		
		In C syntax, this struct will be
		
			struct {
				uintptr_t  Context;
				uintptr_t  SigContext;
				uintptr_t* Buf;
				uintptr_t  Max;
			};
		
		The Context field will be zero to gather a traceback from the
		current program execution point. In this case, the traceback
		function will be called from C code.
		
		Otherwise Context will be a value previously returned by a call to
		the context function. The traceback function should gather a stack
		trace from that saved point in the program execution. The traceback
		function may be called from an execution thread other than the one
		that recorded the context, but only when the context is known to be
		valid and unchanging. The traceback function may also be called
		deeper in the call stack on the same thread that recorded the
		context. The traceback function may be called multiple times with
		the same Context value; it will usually be appropriate to cache the
		result, if possible, the first time this is called for a specific
		context value.
		
		If the traceback function is called from a signal handler on a Unix
		system, SigContext will be the signal context argument passed to
		the signal handler (a C ucontext_t* cast to uintptr_t). This may be
		used to start tracing at the point where the signal occurred. If
		the traceback function is not called from a signal handler,
		SigContext will be zero.
		
		Buf is where the traceback information should be stored. It should
		be PC values, such that Buf[0] is the PC of the caller, Buf[1] is
		the PC of that function's caller, and so on.  Max is the maximum
		number of entries to store.  The function should store a zero to
		indicate the top of the stack, or that the caller is on a different
		stack, presumably a Go stack.
		
		Unlike runtime.Callers, the PC values returned should, when passed
		to the symbolizer function, return the file/line of the call
		instruction.  No additional subtraction is required or appropriate.
		
		On all platforms, the traceback function is invoked when a call from
		Go to C to Go requests a stack trace. On linux/amd64, linux/ppc64le,
		linux/arm64, and freebsd/amd64, the traceback function is also invoked
		when a signal is received by a thread that is executing a cgo call.
		The traceback function should not make assumptions about when it is
		called, as future versions of Go may make additional calls.
		
		The symbolizer function will be called with a single argument, a
		pointer to a struct:
		
			struct {
				PC      uintptr // program counter to fetch information for
				File    *byte   // file name (NUL terminated)
				Lineno  uintptr // line number
				Func    *byte   // function name (NUL terminated)
				Entry   uintptr // function entry point
				More    uintptr // set non-zero if more info for this PC
				Data    uintptr // unused by runtime, available for function
			}
		
		In C syntax, this struct will be
		
			struct {
				uintptr_t PC;
				char*     File;
				uintptr_t Lineno;
				char*     Func;
				uintptr_t Entry;
				uintptr_t More;
				uintptr_t Data;
			};
		
		The PC field will be a value returned by a call to the traceback
		function.
		
		The first time the function is called for a particular traceback,
		all the fields except PC will be 0. The function should fill in the
		other fields if possible, setting them to 0/nil if the information
		is not available. The Data field may be used to store any useful
		information across calls. The More field should be set to non-zero
		if there is more information for this PC, zero otherwise. If More
		is set non-zero, the function will be called again with the same
		PC, and may return different information (this is intended for use
		with inlined functions). If More is zero, the function will be
		called with the next PC value in the traceback. When the traceback
		is complete, the function will be called once more with PC set to
		zero; this may be used to free any information. Each call will
		leave the fields of the struct set to the same values they had upon
		return, except for the PC field when the More field is zero. The
		function must not keep a copy of the struct pointer between calls.
		
		When calling SetCgoTraceback, the version argument is the version
		number of the structs that the functions expect to receive.
		Currently this must be zero.
		
		The symbolizer function may be nil, in which case the results of
		the traceback function will be displayed as numbers. If the
		traceback function is nil, the symbolizer function will never be
		called. The context function may be nil, in which case the
		traceback function will only be called with the context field set
		to zero.  If the context function is nil, then calls from Go to C
		to Go will not show a traceback for the C portion of the call stack.
		
		SetCgoTraceback should be called only once, ideally from an init function.
	
		SetCPUProfileRate sets the CPU profiling rate to hz samples per second.
		If hz <= 0, SetCPUProfileRate turns off profiling.
		If the profiler is on, the rate cannot be changed without first turning it off.
		
		Most clients should use the [runtime/pprof] package or
		the [testing] package's -test.cpuprofile flag instead of calling
		SetCPUProfileRate directly.
	
		SetFinalizer sets the finalizer associated with obj to the provided
		finalizer function. When the garbage collector finds an unreachable block
		with an associated finalizer, it clears the association and runs
		finalizer(obj) in a separate goroutine. This makes obj reachable again,
		but now without an associated finalizer. Assuming that SetFinalizer
		is not called again, the next time the garbage collector sees
		that obj is unreachable, it will free obj.
		
		SetFinalizer(obj, nil) clears any finalizer associated with obj.
		
		New Go code should consider using [AddCleanup] instead, which is much
		less error-prone than SetFinalizer.
		
		The argument obj must be a pointer to an object allocated by calling
		new, by taking the address of a composite literal, or by taking the
		address of a local variable.
		The argument finalizer must be a function that takes a single argument
		to which obj's type can be assigned, and can have arbitrary ignored return
		values. If either of these is not true, SetFinalizer may abort the
		program.
		
		Finalizers are run in dependency order: if A points at B, both have
		finalizers, and they are otherwise unreachable, only the finalizer
		for A runs; once A is freed, the finalizer for B can run.
		If a cyclic structure includes a block with a finalizer, that
		cycle is not guaranteed to be garbage collected and the finalizer
		is not guaranteed to run, because there is no ordering that
		respects the dependencies.
		
		The finalizer is scheduled to run at some arbitrary time after the
		program can no longer reach the object to which obj points.
		There is no guarantee that finalizers will run before a program exits,
		so typically they are useful only for releasing non-memory resources
		associated with an object during a long-running program.
		For example, an [os.File] object could use a finalizer to close the
		associated operating system file descriptor when a program discards
		an os.File without calling Close, but it would be a mistake
		to depend on a finalizer to flush an in-memory I/O buffer such as a
		[bufio.Writer], because the buffer would not be flushed at program exit.
		
		It is not guaranteed that a finalizer will run if the size of *obj is
		zero bytes, because it may share the same address with other zero-size
		objects in memory. See https://go.dev/ref/spec#Size_and_alignment_guarantees.
		
		It is not guaranteed that a finalizer will run for objects allocated
		in initializers for package-level variables. Such objects may be
		linker-allocated, not heap-allocated.
		
		Note that because finalizers may execute arbitrarily far into the future
		after an object is no longer referenced, the runtime is allowed to perform
		a space-saving optimization that batches objects together in a single
		allocation slot. The finalizer for an unreferenced object in such an
		allocation may never run if it always exists in the same batch as a
		referenced object. Typically, this batching only happens for tiny
		(on the order of 16 bytes or less) and pointer-free objects.
		
		A finalizer may run as soon as an object becomes unreachable.
		In order to use finalizers correctly, the program must ensure that
		the object is reachable until it is no longer required.
		Objects stored in global variables, or that can be found by tracing
		pointers from a global variable, are reachable. A function argument or
		receiver may become unreachable at the last point where the function
		mentions it. To make an unreachable object reachable, pass the object
		to a call of the [KeepAlive] function to mark the last point in the
		function where the object must be reachable.
		
		For example, if p points to a struct, such as os.File, that contains
		a file descriptor d, and p has a finalizer that closes that file
		descriptor, and if the last use of p in a function is a call to
		syscall.Write(p.d, buf, size), then p may be unreachable as soon as
		the program enters [syscall.Write]. The finalizer may run at that moment,
		closing p.d, causing syscall.Write to fail because it is writing to
		a closed file descriptor (or, worse, to an entirely different
		file descriptor opened by a different goroutine). To avoid this problem,
		call KeepAlive(p) after the call to syscall.Write.
		
		A single goroutine runs all finalizers for a program, sequentially.
		If a finalizer must run for a long time, it should do so by starting
		a new goroutine.
		
		In the terminology of the Go memory model, a call
		SetFinalizer(x, f) “synchronizes before” the finalization call f(x).
		However, there is no guarantee that KeepAlive(x) or any other use of x
		“synchronizes before” f(x), so in general a finalizer should use a mutex
		or other synchronization mechanism if it needs to access mutable state in x.
		For example, consider a finalizer that inspects a mutable field in x
		that is modified from time to time in the main program before x
		becomes unreachable and the finalizer is invoked.
		The modifications in the main program and the inspection in the finalizer
		need to use appropriate synchronization, such as mutexes or atomic updates,
		to avoid read-write races.
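		
		A compact sketch of the pattern discussed above (fdWrapper, newTempFD,
		and readFD are hypothetical names; new code should prefer [AddCleanup]):
		
			// fdWrapper wraps a file descriptor and closes it when the
			// wrapper becomes unreachable.
			type fdWrapper struct{ fd int }
			
			func newTempFD(path string) (*fdWrapper, error) {
				fd, err := syscall.Open(path, syscall.O_RDONLY, 0)
				if err != nil {
					return nil, err
				}
				w := &fdWrapper{fd: fd}
				runtime.SetFinalizer(w, func(w *fdWrapper) { syscall.Close(w.fd) })
				return w, nil
			}
			
			func readFD(w *fdWrapper, buf []byte) (int, error) {
				n, err := syscall.Read(w.fd, buf)
				// Keep w reachable until Read returns, so the finalizer
				// cannot close w.fd while the system call is in flight.
				runtime.KeepAlive(w)
				return n, err
			}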
	
		SetMutexProfileFraction controls the fraction of mutex contention events
		that are reported in the mutex profile. On average 1/rate events are
		reported. The previous rate is returned.
		
		To turn off profiling entirely, pass rate 0.
		To just read the current rate, pass rate < 0.
		(For n>1 the details of sampling may change.)
	
		Stack formats a stack trace of the calling goroutine into buf
		and returns the number of bytes written to buf.
		If all is true, Stack formats stack traces of all other goroutines
		into buf after the trace for the current goroutine.
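		
		Because the trace is truncated when buf is too small, callers commonly
		retry with a larger buffer, as [runtime/debug.Stack] does (a sketch;
		fullStack is a hypothetical name):
		
			// fullStack grows the buffer until the formatted trace fits.
			func fullStack(all bool) []byte {
				buf := make([]byte, 4096)
				for {
					n := runtime.Stack(buf, all)
					if n < len(buf) {
						return buf[:n]
					}
					buf = make([]byte, 2*len(buf))
				}
			}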
	
		StartTrace enables tracing for the current process.
		While tracing, the data will be buffered and available via [ReadTrace].
		StartTrace returns an error if tracing is already enabled.
		Most clients should use the [runtime/trace] package or the [testing] package's
		-test.trace flag instead of calling StartTrace directly.
	
		StopTrace stops tracing, if it was previously enabled.
		StopTrace only returns after all the reads for the trace have completed.
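		
		For completeness, a sketch of driving StartTrace, ReadTrace, and
		StopTrace directly (traceFor is a hypothetical name; assumes io is
		imported; write errors are ignored for brevity):
		
			// traceFor captures a trace for the duration of fn, draining
			// ReadTrace from a single goroutine as required.
			func traceFor(w io.Writer, fn func()) error {
				if err := runtime.StartTrace(); err != nil {
					return err
				}
				done := make(chan struct{})
				go func() {
					defer close(done)
					for {
						data := runtime.ReadTrace()
						if data == nil {
							return // tracing stopped and all data consumed
						}
						w.Write(data) // copies data out before the next ReadTrace
					}
				}()
				fn()
				runtime.StopTrace()
				<-done
				return nil
			}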
	
		ThreadCreateProfile returns n, the number of records in the thread creation profile.
		If len(p) >= n, ThreadCreateProfile copies the profile into p and returns n, true.
		If len(p) < n, ThreadCreateProfile does not change p and returns n, false.
		
		Most clients should use the runtime/pprof package instead
		of calling ThreadCreateProfile directly.
	
		UnlockOSThread undoes an earlier call to LockOSThread.
		If this drops the number of active LockOSThread calls on the
		calling goroutine to zero, it unwires the calling goroutine from
		its fixed operating system thread.
		If there are no active LockOSThread calls, this is a no-op.
		
		Before calling UnlockOSThread, the caller must ensure that the OS
		thread is suitable for running other goroutines. If the caller made
		any permanent changes to the state of the thread that would affect
		other goroutines, it should not call this function and thus leave
		the goroutine locked to the OS thread until the goroutine (and
		hence the thread) exits.
	
		Version returns the Go tree's version string.
		It is either the commit hash and date at the time of the build or,
		when possible, a release tag like "go1.3".
		
		How to extract and insert information held in the st_info field.
	 func _ELF_ST_TYPE(val byte) byte	
		abort crashes the runtime in situations where even throw might not
		work. In general it should do something a debugger will recognize
		(e.g., an INT3 on x86). A crash in abort is recognized by the
		signal handler, which will attempt to tear down the runtime
		immediately.
	
		abs returns the absolute value of x.
		
		Special cases are:
		
			abs(±Inf) = +Inf
			abs(NaN) = NaN
	
		Called from write_err_android.go only, but defined in sys_linux_*.s;
		declared here (instead of in write_err_android.go) for go vet on non-android builds.
		The return value is the raw syscall result, which may encode an error number.
	
		This function may be called in nosplit context and thus must be nosplit.
	
		Associate p and the current m.
		
		This function is allowed to have write barriers even if the caller
		isn't because it immediately acquires pp.
	
		activeModules returns a slice of active modules.
		
		A module is active once its gcdatamask and gcbssmask have been
		assembled and it is usable by the GC.
		
		This is nosplit/nowritebarrier because it is called by the
		cgo pointer checking code.
	
		Should be a built-in for unsafe.Pointer?
		
		add should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - fortio.org/log
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		add1 returns the byte pointer p+1.
	
		addb returns the byte pointer p+n.
	
		addCleanup attaches a cleanup function to the object. Multiple
		cleanups are allowed on an object, and even the same pointer.
		A cleanup id is returned which can be used to uniquely identify
		the cleanup.
	
		The compiler emits calls to runtime.addCovMeta
		but this code has moved to rtcov.AddMeta.
	
		Adds a newly allocated M to the extra M list.
	
		Adds a finalizer to the object p. Returns true if it succeeded.
	
		Called from linker-generated .initarray; declared for go vet; do NOT call from Go.
	
		addrsToSummaryRange converts base and limit pointers into a range
		of entries for the given summary level.
		
		The returned range is inclusive on the lower bound and exclusive on
		the upper bound.
	
		addspecial adds the special record s to the list of special records for
		the object p. All fields of s should be filled in except for
		offset & next, which this routine will fill in.
		Returns true if the special was successfully added, false otherwise.
		(The add will fail only if a record with the same p and s->kind
		already exists unless force is set to true.)
 func adjustctxt(gp *g, adjinfo *adjustinfo)
 func adjustdefers(gp *g, adjinfo *adjustinfo)
		Note: the argument/return area is adjusted by the callee.
	 func adjustpanics(gp *g, adjinfo *adjustinfo)	
		adjustpointer checks whether *vpp is in the old stack described by adjinfo.
		If so, it rewrites *vpp to point into the new stack.
	
		bv describes the memory starting at address scanp.
		Adjust any pointers contained therein.
	
		adjustSignalStack adjusts the current stack guard based on the
		stack pointer that is actually in use while handling a signal.
		We do this in case some non-Go code called sigaltstack.
		This reports whether the stack was adjusted, and if so stores the old
		signal stack in *gsigstack.
	 func adjustsudogs(gp *g, adjinfo *adjustinfo)	
		alignDown rounds n down to a multiple of a. a must be a power of 2.
	
		alignUp rounds n up to a multiple of a. a must be a power of 2.
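		
		Both describe the standard power-of-two rounding; a minimal sketch
		(not necessarily the runtime's exact code):
		
			// a must be a power of 2 in both helpers.
			func alignDown(n, a uintptr) uintptr { return n &^ (a - 1) }
			func alignUp(n, a uintptr) uintptr   { return (n + a - 1) &^ (a - 1) }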
	
		allGsSnapshot returns a snapshot of the slice of all Gs.
		
		The world must be stopped or allglock must be held.
	
		Allocate a new m unassociated with any thread.
		Can use p for allocation context if needed.
		fn is recorded as the new m's m.mstartfn.
		id is optional pre-allocated m ID. Omit by passing -1.
		
		This function is allowed to have write barriers even if the caller
		isn't because it borrows pp.
	
		arena_arena_Free is a wrapper around (*userArena).free.
	
		arena_arena_New is a wrapper around (*userArena).new, except that typ
		is an any (must be a *_type, still) and typ must be a type descriptor
		for a pointer to the type to actually be allocated, i.e. pass a *T
		to allocate a T. This is necessary because this function returns a *T.
	
		arena_arena_Slice is a wrapper around (*userArena).slice.
	
		arena_heapify takes a value that lives in an arena and makes a copy
		of it on the heap. Values that don't live in an arena are returned unmodified.
	
		arena_newArena is a wrapper around newUserArena.
	
		arenaBase returns the low address of the region covered by heap
		arena i.
	
		arenaIndex returns the index into mheap_.arenas of the arena
		containing metadata for p. This index combines of an index into the
		L1 map and an index into the L2 map and should be used as
		mheap_.arenas[ai.l1()][ai.l2()].
		
		If p is outside the range of valid heap addresses, either l1() or
		l2() will be out of bounds.
		
		It is nosplit because it's called by spanOf and several other
		nosplit functions.
	
		nosplit for use in linux startup sysargs.
 func asanpoison(addr unsafe.Pointer, sz uintptr)
 func asanregisterglobals(addr unsafe.Pointer, sz uintptr)
 func asanunpoison(addr unsafe.Pointer, sz uintptr)
 func asmcgocall(fn, arg unsafe.Pointer) int32
 func asmcgocall_no_g(fn, arg unsafe.Pointer)
 func assertE2I(inter *interfacetype, t *_type) *itab
 func assertE2I2(inter *interfacetype, t *_type) *itab
 func assertLockHeld(l *mutex)
		asyncPreempt saves all user registers and calls asyncPreempt2.
		
		When stack scanning encounters an asyncPreempt frame, it scans that
		frame and its parent frame conservatively.
		
		asyncPreempt is implemented in assembly.
	
		atoi is like atoi64 but for integers
		that fit into an int.
	
		atoi32 is like atoi but for integers
		that fit into an int32.
	
		atoi64 parses an int64 from a string s.
		The bool result reports whether s is a number
		representable by a value of type int64.
	
		atomic_casPointer is the implementation of runtime/internal/UnsafePointer.CompareAndSwap
		(like CompareAndSwapNoWB but with the write barrier).
	
		atomic_storePointer is the implementation of runtime/internal/UnsafePointer.Store
		(like StoreNoWB but with the write barrier).
	
		atomicAllG returns &allgs[0] and len(allgs) for use with atomicAllGIndex.
	
		atomicAllGIndex returns ptr[i] with the allgptr returned from atomicAllG.
	
		atomicstorep performs *ptr = new atomically and invokes a write barrier.
	
		atomicwb performs a write barrier before an atomic pointer write.
		The caller should guard the call with "if writeBarrier.enabled".
		
		atomicwb should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/gopkg
		  - github.com/songzhibin97/gkit
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		called from assembly.
	
		badDefer returns a fixed bad defer pointer for poisoning an atomic defer list head.
	
		called from assembly.
	
		badPointer throws bad pointer in heap panic.
	
		This runs on a foreign stack, without an m or a g. No stack split.
	
		badTimer is called if the timer data structures have been corrupted,
		presumably due to racy use by the program. We panic here rather than
		panicking due to invalid slice access while holding locks.
		See issue #25686.
	
		Background scavenger.
		
		The background scavenger maintains the RSS of the application below
		the line described by the proportional scavenging statistics in
		the mheap struct.
	
		Build a binary search tree with the n objects in the list
		x.obj[idx], x.obj[idx+1], ..., x.next.obj[0], ...
		Returns the root of that tree, and the buf+idx of the nth object after x.obj[idx].
		(The first object that was not included in the binary search tree.)
		If n == 0, returns nil, x.
	
		blockableSig reports whether sig may be blocked by the signal mask.
		We never want to block the signals marked _SigUnblock;
		these are the synchronous signals that turn into a Go panic.
		We never want to block the preemption signal if it is being used.
		In a Go program--not a c-archive/c-shared--we never want to block
		the signals marked _SigKill or _SigThrow, as otherwise it's possible
		for all running threads to block them and delay their delivery until
		we start a new thread. When linked into a C program we let the C code
		decide on the disposition of those signals.
	
		blockAlignSummaryRange aligns indices into the given level to that
		level's block width (1 << levelBits[level]). It assumes lo is inclusive
		and hi is exclusive, and so aligns them down and up respectively.
	 func blockevent(cycles int64, skip int)	
		blockProfileInternal returns the number of records n in the profile. If there
		are less than size records, copyFn is invoked for each record, and ok returns
		true.
	
		blocksampled returns true for all events where cycles >= rate. Shorter
		events have a cycles/rate random chance of returning true.
	
		blockTimerChan is called when a channel op has decided to block on c.
		The caller holds the channel lock for c and possibly other channels.
		blockTimerChan makes sure that c is in a timer heap,
		adding it if needed.
	
		blockUntilEmptyFinalizerQueue blocks until either the finalizer
		queue is emptied (and the finalizers have executed) or the timeout
		is reached. Returns true if the finalizer queue was emptied.
		This is used by the runtime and sync tests.
	
		bool2int returns 0 if x is false or 1 if x is true.
	
		bootstrapRand returns a random uint64 from the global random generator.
	
		bootstrapRandReseed reseeds the bootstrap random number generator,
		clearing from memory any trace of previously returned random numbers.
	
		bswapIfBigEndian swaps the byte order of the uintptr on goarch.BigEndian platforms,
		and leaves it alone elsewhere.
	
		buildGCMask writes the ptr/nonptr bitmap for t to dst.
		t must have a pointer.
	
		buildInterfaceSwitchCache constructs an interface switch cache
		containing all the entries from oldC plus the new entry
		(typ,case_,tab).
	 func buildTypeAssertCache(oldC *abi.TypeAssertCache, typ *_type, tab *itab) *abi.TypeAssertCache	
		bulkBarrierBitmap executes write barriers for copying from [src,
		src+size) to [dst, dst+size) using a 1-bit pointer bitmap. src is
		assumed to start maskOffset bytes into the data covered by the
		bitmap in bits (which may not be a multiple of 8).
		
		This is used by bulkBarrierPreWrite for writes to data and BSS.
	
		bulkBarrierPreWrite executes a write barrier
		for every pointer slot in the memory range [src, src+size),
		using pointer/scalar information from [dst, dst+size).
		This executes the write barriers necessary before a memmove.
		src, dst, and size must be pointer-aligned.
		The range [dst, dst+size) must lie within a single object.
		It does not perform the actual writes.
		
		As a special case, src == 0 indicates that this is being used for a
		memclr. bulkBarrierPreWrite will pass 0 for the src of each write
		barrier.
		
		Callers should call bulkBarrierPreWrite immediately before
		calling memmove(dst, src, size). This function is marked nosplit
		to avoid being preempted; the GC must not stop the goroutine
		between the memmove and the execution of the barriers.
		The caller is also responsible for cgo pointer checks if this
		may be writing Go pointers into non-Go memory.
		
		Pointer data is not maintained for allocations containing
		no pointers at all; any caller of bulkBarrierPreWrite must first
		make sure the underlying allocation contains pointers, usually
		by checking typ.PtrBytes.
		
		The typ argument is the type of the space at src and dst (and the
		element type if src and dst refer to arrays) and it is optional.
		If typ is nil, the barrier will still behave as expected and typ
		is used purely as an optimization. However, it must be used with
		care.
		
		If typ is not nil, then src and dst must point to one or more values
		of type typ. The caller must ensure that the ranges [src, src+size)
		and [dst, dst+size) refer to one or more whole values of type typ
		(leaving off the pointerless tail of the space is OK). If this
		precondition is not followed, this function will fail to scan the
		right pointers.
		
		When in doubt, pass nil for typ. That is safe and will always work.
		
		Callers must perform cgo checks if goexperiment.CgoCheck2.
	
		bulkBarrierPreWriteSrcOnly is like bulkBarrierPreWrite but
		does not execute write barriers for [dst, dst+size).
		
		In addition to the requirements of bulkBarrierPreWrite
		callers need to ensure [dst, dst+size) is zeroed.
		
		This is used for special cases where e.g. dst was just
		created and zeroed with malloc.
		
		The type of the space can be provided purely as an optimization.
		See bulkBarrierPreWrite's comment for more details -- use this
		optimization with great care.
 func bytealg_MakeNoZero(len int) []byte
 func call1024(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
 func call1048576(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
 func call1073741824(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
 func call128(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
 func call131072(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
 func call134217728(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
		in asm_*.s
		not called directly; definitions here supply type information for traceback.
		These must have the same signature (arg pointer map) as reflectcall.
 func call16384(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
 func call16777216(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
 func call2048(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
 func call2097152(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
 func call256(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
 func call262144(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
 func call268435456(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
 func call32(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
 func call32768(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
 func call33554432(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
 func call4096(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
 func call4194304(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
 func call512(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
 func call524288(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
 func call536870912(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
 func call64(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
 func call65536(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
 func call67108864(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
 func call8192(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
 func call8388608(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
		Set or reset the system stack bounds for a callback on sp.
		
		Must be nosplit because it is called by needm prior to fully initializing
		the M.
	
		callCgoMmap calls the mmap function in the runtime/cgo package
		using the GCC calling convention. It is implemented in assembly.
	
		callCgoMunmap calls the munmap function in the runtime/cgo package
		using the GCC calling convention. It is implemented in assembly.
	
		callCgoSigaction calls the sigaction function in the runtime/cgo package
		using the GCC calling convention. It is implemented in assembly.
	
		callCgoSymbolizer calls the cgoSymbolizer function.
	
		callers should be an internal detail,
		(and is almost identical to Callers),
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/phuslu/log
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		canpanic returns false if a signal should throw instead of
		panicking.
	
		canPreemptM reports whether mp is in a state that is safe to preempt.
		
		It is nosplit because it has nosplit callers.
	 func cansemacquire(addr *uint32) bool	
		The Gscanstatuses are acting like locks and this releases them.
		If it proves to be a performance hit we should be able to make these
		simple atomic stores but for now we are going to throw if
		we see an inconsistent state.
	
		casGFromPreempted attempts to transition gp from _Gpreempted to
		_Gwaiting. If successful, the caller is responsible for
		re-scheduling gp.
	
		If asked to move to or from a Gscanstatus this will throw. Use the castogscanstatus
		and casfrom_Gscanstatus instead.
		casgstatus will loop if the g->atomicstatus is in a Gscan status until the routine that
		put it in the Gscan state is finished.
	
		casGToPreemptScan transitions gp from _Grunning to _Gscan|_Gpreempted.
		
		TODO(austin): This is the only status operation that both changes
		the status and locks the _Gscan bit. Rethink this.
	
		casGToWaiting transitions gp from old to _Gwaiting, and sets the wait reason.
		
		Use this over casgstatus when possible to ensure that a waitreason is set.
	
		casGToWaitingForSuspendG transitions gp from old to _Gwaiting, and sets the wait reason.
		The wait reason must be a valid isWaitingForSuspendG wait reason.
		
		Use this over casgstatus when possible to ensure that a waitreason is set.
	
		This will return false if the gp is not in the expected status and the cas fails.
		This acts like a lock acquire while the casfromgstatus acts like a lock release.
	
		bindm stores the g0 of the current m into a thread-specific value.
		
		We allocate a pthread per-thread variable using pthread_key_create,
		to register a thread-exit-time destructor. Setting the thread-specific
		value of the pthread key here enables that destructor, so that
		pthread_key_destructor can call dropm while the C thread is exiting.
		
		The saved g is then used in pthread_key_destructor: on some platforms
		the g stored in TLS by Go may be cleared before the destructor is
		invoked, so we restore g from the stored value before calling dropm.
		
		We store g0 instead of m to keep the assembly code simpler,
		since we need to restore g0 in runtime.cgocallback.
		
		On systems without pthreads, like Windows, bindm shouldn't be used.
		
		NOTE: this always runs without a P, so nowritebarrierrec is required.
	
		Call from Go to C.
		
		This must be nosplit because it's used for syscalls on some
		platforms. Syscalls may have untyped arguments on the stack, so
		it's not safe to grow or scan the stack.
		
		cgocall should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/ebitengine/purego
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		Not all cgocallback frames are actually cgocallback,
		so not all have these arguments. Mark them uintptr so that the GC
		does not misinterpret memory when the arguments are not present.
		cgocallback is not called from Go, only from crosscall2.
		This in turn calls cgocallbackg, which is where we'll find
		pointer-declared arguments.
		
		When fn is nil (frame is saved g), call dropm instead,
		this is used when the C thread is exiting.
	
		Call from C back to Go. fn must point to an ABIInternal Go entry-point.
	 func cgocallbackg1(fn, frame unsafe.Pointer, ctxt uintptr)	
		cgoCheckArg is the real work of cgoCheckPointer. The argument p
		is either a pointer to the value (of type t), or the value itself,
		depending on indir. The top parameter is whether we are at the top
		level, where Go pointers are allowed. Go pointers to pinned objects are
		allowed as long as they don't reference other unpinned pointers.
	
		cgoCheckBits checks the block of memory at src, for up to size
		bytes, and throws if it finds an unpinned Go pointer. The gcbits mark each
		pointer value. The src pointer is off bytes into the gcbits.
	
		cgoCheckMemmove is called when moving a block of memory.
		It throws if the program is copying a block that contains an unpinned Go
		pointer into non-Go memory.
		
		This is called from generated code when GOEXPERIMENT=cgocheck2 is enabled.
	
		cgoCheckMemmove2 is called when moving a block of memory.
		dst and src point off bytes into the value to copy.
		size is the number of bytes to copy.
		It throws if the program is copying a block that contains an unpinned Go
		pointer into non-Go memory.
	
		cgoCheckPointer checks if the argument contains a Go pointer that
		points to an unpinned Go pointer, and panics if it does.
	
		cgoCheckPtrWrite is called whenever a pointer is stored into memory.
		It throws if the program is storing an unpinned Go pointer into non-Go
		memory.
		
		This is called from generated code when GOEXPERIMENT=cgocheck2 is enabled.
	
		cgoCheckResult is called to check the result parameter of an
		exported Go function. It panics if the result is or contains any
		other pointer into unpinned Go memory.
	
		cgoCheckSliceCopy is called when copying n elements of a slice.
		src and dst are pointers to the first element of the slice.
		typ is the element type of the slice.
		It throws if the program is copying slice elements that contain unpinned Go
		pointers into non-Go memory.
	
		cgoCheckTypedBlock checks the block of memory at src, for up to size bytes,
		and throws if it finds an unpinned Go pointer. The type of the memory is typ,
		and src is off bytes into that type.
	
		cgoCheckUnknownPointer is called for an arbitrary pointer into Go
		memory. It checks whether that Go memory contains any other
		pointer into unpinned Go memory. If it does, we panic.
		The return values are unused but useful to see in panic tracebacks.
	
		cgoCheckUsingType is like cgoCheckTypedBlock, but is a last ditch
		fall back to look for pointers in src using the type information.
		We only use this when looking at a value on the stack when the type
		uses a GC program, because otherwise it's more efficient to use the
		GC bits. This is called on the system stack.
	
		cgoContextPCs gets the PC values from a cgo traceback.
	
		cgoInRange reports whether p is between start and end.
	
		cgoIsGoPointer reports whether the pointer is a Go pointer--a
		pointer to Go memory. We only care about Go memory that might
		contain pointers.
	
		cgoKeepAlive is called by cgo-generated code (using go:linkname to get at
		an unexported name). This call keeps its argument alive until the call site;
		cgo emits the call after the last possible use of the argument by C code.
		cgoKeepAlive is marked in the cgo-generated code as //go:noescape, so
		unlike cgoUse it does not force the argument to escape to the heap.
		This is used to implement the #cgo noescape directive.
	
		called from (incomplete) assembly.
	
		cgoUse is called by cgo-generated code (using go:linkname to get at
		an unexported name). The calls serve two purposes:
		1) they are opaque to escape analysis, so the argument is considered to
		escape to the heap.
		2) they keep the argument alive until the call site; the call is emitted after
		the end of the (presumed) use of the argument by C.
		cgoUse should not actually be called (see cgoAlwaysFalse).
	
		chanbuf(c, i) is a pointer to the i'th slot in the buffer.
		
		chanbuf should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/fjl/memsize
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		chanrecv receives on channel c and writes the received data to ep.
		ep may be nil, in which case received data is ignored.
		If block == false and no elements are available, returns (false, false).
		Otherwise, if c is closed, zeros *ep and returns (true, false).
		Otherwise, fills in *ep with an element and returns (true, true).
		A non-nil ep must point to the heap or the caller's stack.
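		A user-level sketch of these cases (not runtime code): a select with a default
		clause reaches the non-blocking path, and the two-value receive form exposes
		the second result described above.
		
			package main
			
			import "fmt"
			
			func main() {
				c := make(chan int, 1)
			
				// Non-blocking receive: no element is available, so the default case runs.
				select {
				case v := <-c:
					fmt.Println("received", v)
				default:
					fmt.Println("would block: channel is empty")
				}
			
				c <- 42
				close(c)
			
				v, ok := <-c // element still buffered: 42 true
				fmt.Println(v, ok)
			
				v, ok = <-c // closed and drained: 0 false
				fmt.Println(v, ok)
			}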
	
		entry points for <- c from compiled code.
	
		Generic single channel send/recv.
		If block is not nil, then the protocol will not sleep
		but return if it could not complete.
		
		Sleep can wake up with g.param == nil when a channel involved
		in the sleep has been closed. It is easiest to loop and re-run
		the operation; we'll see that it's now closed.
	
		entry point for c <- x from compiled code.
	
		cheaprand is a non-cryptographic-quality 32-bit random generator
		suitable for calling at very high frequency (such as during scheduling decisions)
		and at sensitive moments in the runtime (such as during stack unwinding).
		it is "cheap" in the sense of both expense and quality.
		
		cheaprand must not be exported to other packages:
		the rule is that other packages using runtime-provided
		randomness must always use rand.
		
		cheaprand should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/gopkg
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		cheaprand64 is a non-cryptographic-quality 63-bit random generator
		suitable for calling at very high frequency (such as during sampling decisions).
		it is "cheap" in the sense of both expense and quality.
		
		cheaprand64 must not be exported to other packages:
		the rule is that other packages using runtime-provided
		randomness must always use rand.
		
		cheaprand64 should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/zhangyunhao116/fastrand
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		cheaprandn is like cheaprand() % n but faster.
		
		cheaprandn must not be exported to other packages:
		the rule is that other packages using runtime-provided
		randomness must always use randn.
		
		cheaprandn should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/phuslu/log
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
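		A common way to get "like cheaprand() % n but faster" is the multiply-shift
		reduction, which replaces the division behind % with a single multiply. A
		hedged sketch using math/rand/v2 as the source of bits (randnSketch is an
		illustrative name, not the runtime's code):
		
			package main
			
			import (
				"fmt"
				"math/rand/v2"
			)
			
			// randnSketch maps a uniform 32-bit value x into [0, n) as (x*n)>>32.
			// This avoids a division; the slight bias it introduces is acceptable
			// for cheap, non-cryptographic uses.
			func randnSketch(x, n uint32) uint32 {
				return uint32((uint64(x) * uint64(n)) >> 32)
			}
			
			func main() {
				for i := 0; i < 5; i++ {
					fmt.Println(randnSketch(rand.Uint32(), 10)) // values in [0, 10)
				}
			}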
	
		checkASM reports whether assembly runtime checks have passed.
	
		Check for a deadlock situation.
		The check is based on the number of running M's: if it is 0, the program is deadlocked.
		sched.lock must be held.
	
		Check for idle-priority GC, without a P on entry.
		
		If some GC work, a P, and a worker G are all available, the P and G will be
		returned. The returned P has not been wired yet.
	
		sched.lock must be held.
	
		checkptrBase returns the base address for the allocation containing
		the address p.
		
		Importantly, if p1 and p2 point into the same variable, then
		checkptrBase(p1) == checkptrBase(p2). However, the converse/inverse
		is not necessarily true as allocations can have trailing padding,
		and multiple variables may be packed into a single allocation.
		
		checkptrBase should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/sonic
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		checkptrStraddles reports whether the first size bytes of memory
		addressed by ptr are known to straddle more than one Go allocation.
	
		Check all Ps for a runnable G to steal.
		
		On entry we have no P. If a G is available to steal and a P is available,
		the P is returned; the caller should acquire it and then attempt to steal
		the work to it.
	
		Check all Ps for a timer expiring sooner than pollUntil.
		
		Returns updated pollUntil value.
	
		chunkBase returns the base address of the palloc chunk at index ci.
	
		chunkIndex returns the global index of the palloc chunk containing the
		pointer p.
	
		chunkPageIndex computes the index of the page that contains p,
		relative to the chunk which contains p.
	
		clearSignalHandlers clears all signal handlers that are not ignored
		back to the default. This is called by the child after a fork, so that
		we can enable the signal mask for the exec without worrying about
		running a signal handler in the child.
	
		clobberfree sets the memory content at x to bad content, for debugging
		purposes.
	 func closeonexec(fd int32)
	 func compute0(_ *statAggregate, out *metricValue)
	 func concatbyte2(a0, a1 string) []byte
	 func concatbyte3(a0, a1, a2 string) []byte
	 func concatbyte4(a0, a1, a2, a3 string) []byte
	 func concatbyte5(a0, a1, a2, a3, a4 string) []byte
		concatbytes implements a Go string concatenation x+y+z+... returning a slice
		of bytes.
		The operands are passed in the slice a.
	 func concatstring2(buf *tmpBuf, a0, a1 string) string
	 func concatstring3(buf *tmpBuf, a0, a1, a2 string) string
	 func concatstring4(buf *tmpBuf, a0, a1, a2, a3 string) string
	 func concatstring5(buf *tmpBuf, a0, a1, a2, a3, a4 string) string
		concatstrings implements a Go string concatenation x+y+z+...
		The operands are passed in the slice a.
		If buf != nil, the compiler has determined that the result does not
		escape the calling function, so the string data can be stored in buf
		if small enough.
	
		convT converts a value of type t, which is pointed to by v, to a pointer that can
		be used as the second word of an interface value.
	
		convT64 should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/sonic
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		convTslice should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/sonic
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		convTstring should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/sonic
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		copyBlockProfileRecord copies the sample values and call stack from src to dst.
		The call stack is copied as-is. The caller is responsible for handling inline
		expansion, needed when the call stack was collected with frame pointer unwinding.
	
		copysign returns a value with the magnitude
		of x and the sign of y.
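		A sketch of one standard way to implement this, by splicing the IEEE 754 sign
		bit of y onto the magnitude bits of x (math.Copysign provides the same
		operation for user code):
		
			package main
			
			import (
				"fmt"
				"math"
			)
			
			// copysignSketch combines the magnitude of x with the sign of y by
			// replacing the sign bit of x with the sign bit of y.
			func copysignSketch(x, y float64) float64 {
				const signBit = 1 << 63
				return math.Float64frombits(math.Float64bits(x)&^signBit | math.Float64bits(y)&signBit)
			}
			
			func main() {
				fmt.Println(copysignSketch(3.5, -1)) // -3.5
				fmt.Println(math.Copysign(3.5, -1))  // same result via the math package
			}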
	
		Copies gp's stack to a new stack of a different size.
		Caller must have changed gp status to Gcopystack.
	
		coroexit is like coroswitch but closes the coro
		and exits the current goroutine
	
		corostart is the entry func for a new coroutine.
		It runs the coroutine user function f passed to corostart
		and then calls coroexit to remove the extra concurrency.
	
		coroswitch switches to the goroutine blocked on c
		and then blocks the current goroutine on c.
	
		coroswitch_m is the implementation of coroswitch
		that runs on the m stack.
		
		Note: Coroutine switches are expected to happen at
		an order of magnitude (or more) higher frequency
		than regular goroutine switches, so this path is heavily
		optimized to remove unnecessary work.
		The fast path here is three CAS: the one at the top on gp.atomicstatus,
		the one in the middle to choose the next g,
		and the one at the bottom on gnext.atomicstatus.
		It is important not to add more atomic operations or other
		expensive operations to the fast path.
	
		countrunes returns the number of runes in s.
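		User code gets the equivalent count from unicode/utf8; a small example of the
		byte-count versus rune-count distinction:
		
			package main
			
			import (
				"fmt"
				"unicode/utf8"
			)
			
			func main() {
				s := "héllo, 世界"
				fmt.Println(len(s))                    // bytes: 14
				fmt.Println(utf8.RuneCountInString(s)) // runes: 9
			}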
	
		countSub subtracts two counts obtained from profIndex.dataCount or profIndex.tagCount,
		assuming that they are no more than 2^29 apart (guaranteed since they are never more than
		len(data) or len(tags) apart, respectively).
		tagCount wraps at 2^30, while dataCount wraps at 2^32.
		This function works for both.
	
		cpuinit sets up CPU feature flags and calls internal/cpu.Initialize. env should be the complete
		value of the GODEBUG environment variable.
	
		careful: cputicks is not guaranteed to be monotonic! In particular, we have
		noticed drift between cpus on certain os/arch combinations. See issue 8976.
	
		create returns an fd to a write-only file.
	
		debugCallCheck checks whether it is safe to inject a debugger
		function call with return PC pc. If not, it returns a string
		explaining why.
	 func debugCallPanicked(val any)	
		debugCallWrap starts a new goroutine to run a debug call and blocks
		the calling goroutine. On the goroutine, it prepares to recover
		panics from the debug call, and then calls the call dispatching
		function at PC dispatch.
		
		This must be deeply nosplit because there are untyped values on the
		stack from debugCallV2.
	
		debugCallWrap1 is the continuation of debugCallWrap on the callee
		goroutine.
	 func debugCallWrap2(dispatch uintptr)	
		debugPinnerV1 returns a new Pinner that pins itself. This function can be
		used by debuggers to easily obtain a Pinner that will not be garbage
		collected (or moved in memory) even if no references to it exist in the
		target program. This pinner in turn can be used to extend this property
		to other objects, which debuggers can use to simplify the evaluation of
		expressions involving multiple call injections.
	
		decoderune returns the non-ASCII rune at the start of
		s[k:] and the index after the rune in s.
		
		decoderune assumes that caller has checked that
		the to be decoded rune is a non-ASCII rune.
		
		If the string appears to be incomplete or decoding problems
		are encountered (runeerror, k + 1) is returned to ensure
		progress when decoderune is used to iterate over a string.
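		unicode/utf8 exposes the same kind of decoding to user code, including an
		error rune plus a size of one byte for invalid input so that iteration still
		makes progress; for example:
		
			package main
			
			import (
				"fmt"
				"unicode/utf8"
			)
			
			func main() {
				s := "a¢€"
				for i := 0; i < len(s); {
					r, size := utf8.DecodeRuneInString(s[i:])
					fmt.Printf("index %d: %q (%d bytes)\n", i, r, size)
					i += size
				}
				// Invalid input yields utf8.RuneError with size 1, so a loop still advances.
				r, size := utf8.DecodeRuneInString("\xff")
				fmt.Println(r == utf8.RuneError, size) // true 1
			}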
	
		deductAssistCredit reduces the current G's assist credit
		by size bytes, and assists the GC if necessary.
		
		Caller must be preemptible.
		
		Returns the G for which the assist credit was accounted.
	
		deductSweepCredit deducts sweep credit for allocating a span of
		size spanBytes. This must be performed *before* the span is
		allocated to ensure the system has enough credit. If necessary, it
		performs sweeping to prevent going in to debt. If the caller will
		also sweep pages (e.g., for a large allocation), it can pass a
		non-zero callerSweepPages to leave that many pages unswept.
		
		deductSweepCredit makes a worst-case assumption that all spanBytes
		bytes of the ultimately allocated span will be available for object
		allocation.
		
		deductSweepCredit is the core of the "proportional sweep" system.
		It uses statistics gathered by the garbage collector to perform
		enough sweeping so that all pages are swept during the concurrent
		sweep phase between GC cycles.
		
		mheap_ must NOT be locked.
	
		deferconvert converts the rangefunc defer list of d0 into an ordinary list
		following d0.
		See the doc comment for deferrangefunc for details.
	
		Create a new deferred function fn, which has no arguments and results.
		The compiler turns a defer statement into a call to this.
	
		deferprocat is like deferproc but adds to the atomic list represented by frame.
		See the doc comment for deferrangefunc for details.
	
		deferprocStack queues a new deferred function with a defer record on the stack.
		The defer record must have its fn field initialized.
		All other fields can contain junk.
		Nosplit because of the uninitialized pointer fields on the stack.
	
		deferrangefunc is called by functions that are about to
		execute a range-over-function loop in which the loop body
		may execute a defer statement. That defer needs to add to
		the chain for the current function, not the func literal synthesized
		to represent the loop body. To do that, the original function
		calls deferrangefunc to obtain an opaque token representing
		the current frame, and then the loop body uses deferprocat
		instead of deferproc to add to that frame's defer lists.
		
		The token is an 'any' with underlying type *atomic.Pointer[_defer].
		It is the atomically-updated head of a linked list of _defer structs
		representing deferred calls. At the same time, we create a _defer
		struct on the main g._defer list with d.head set to this head pointer.
		
		The g._defer list is now a linked list of deferred calls,
		but an atomic list hanging off:
		
				g._defer => d4 -> d3 -> drangefunc -> d2 -> d1 -> nil
			                             | .head
			                             |
			                             +--> dY -> dX -> nil
		
		with each -> indicating a d.link pointer, and where drangefunc
		has the d.rangefunc = true bit set.
		Note that the function being ranged over may have added
		its own defers (d4 and d3), so drangefunc need not be at the
		top of the list when deferprocat is used. This is why we pass
		the atomic head explicitly.
		
		To keep misbehaving programs from crashing the runtime,
		deferprocat pushes new defers onto the .head list atomically.
		The fact that it is a separate list from the main goroutine
		defer list means that the main goroutine's defers can still
		be handled non-atomically.
		
		In the diagram, dY and dX are meant to be processed when
		drangefunc would be processed, which is to say the defer order
		should be d4, d3, dY, dX, d2, d1. To make that happen,
		when defer processing reaches a d with rangefunc=true,
		it calls deferconvert to atomically take the extras
		away from d.head and then adds them to the main list.
		
		That is, deferconvert changes this list:
		
				g._defer => drangefunc -> d2 -> d1 -> nil
			                 | .head
			                 |
			                 +--> dY -> dX -> nil
		
		into this list:
		
			g._defer => dY -> dX -> d2 -> d1 -> nil
		
		It also poisons *drangefunc.head so that any future
		deferprocat using that head will throw.
		(The atomic head is ordinary garbage collected memory so that
		it's not a problem if user code holds onto it beyond
		the lifetime of drangefunc.)
		
		TODO: We could arrange for the compiler to call into the
		runtime after the loop finishes normally, to do an eager
		deferconvert, which would catch calling the loop body
		and having it defer after the loop is done. If we have a
		more general catch of loop body misuse, though, this
		might not be worth worrying about in addition.
		
		See also ../cmd/compile/internal/rangefunc/rewrite.go.
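		The user-visible effect of this machinery (assuming a Go release with
		range-over-function loops) is that a defer inside the loop body attaches to
		the enclosing function, not to the synthesized loop-body literal. A small
		example:
		
			package main
			
			import "fmt"
			
			// pair yields two values; the loop body below becomes a function
			// literal, which is why the runtime needs the machinery described above.
			func pair(yield func(int) bool) {
				_ = yield(1) && yield(2)
			}
			
			func demo() {
				for v := range pair {
					// This defer runs when demo returns, not when the loop-body
					// function returns after each iteration.
					defer fmt.Println("deferred for", v)
				}
				fmt.Println("loop done")
			}
			
			func main() {
				demo()
				// Output:
				// loop done
				// deferred for 2
				// deferred for 1
			}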
	
		deferreturn runs deferred functions for the caller's frame.
		The compiler inserts a call to this at the end of any
		function which calls defer.
	
		dieFromSignal kills the program with a signal.
		This provides the expected exit status for the shell.
		This is only called with fatal signals expected to kill the process.
	
		128/64 -> 64 quotient, 64 remainder.
		Adapted from Hacker's Delight.
	
		divRoundUp returns ceil(n / a).
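		This is the usual add-then-divide integer idiom; a sketch (divRoundUpSketch is
		an illustrative name), valid as long as n + a - 1 does not overflow:
		
			package main
			
			import "fmt"
			
			// divRoundUpSketch computes ceil(n / a) with integer arithmetic only:
			// adding a-1 before dividing rounds any nonzero remainder up.
			func divRoundUpSketch(n, a uintptr) uintptr {
				return (n + a - 1) / a
			}
			
			func main() {
				fmt.Println(divRoundUpSketch(10, 4)) // 3
				fmt.Println(divRoundUpSketch(12, 4)) // 3
			}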
	
		dlog returns a debug logger. The caller can use methods on the
		returned logger to add values, which will be space-separated in the
		final output, much like println. The caller must call end() to
		finish the message.
		
		dlog can be used from highly-constrained corners of the runtime: it
		is safe to use in the signal handler, from within the write
		barrier, from within the stack implementation, and in places that
		must be recursively nosplit.
		
		This will be compiled away if built without the debuglog build tag.
		However, argument construction may not be. If any of the arguments
		are not literals or trivial expressions, consider protecting the
		call with "if dlogEnabled".
	
		dolockOSThread is called by LockOSThread and lockOSThread below
		after they modify m.locked. Do not allow preemption during this call,
		or else the m might be different in this function than in the caller.
	
		gp is the crashing g running on this M, but may be a user G, while getg() is
		always g0.
	
		doRecordGoroutineProfile writes gp1's call stack and labels to an in-progress
		goroutine profile. Preemption is disabled.
		
		This may be called via tryRecordGoroutineProfile in two ways: by the
		goroutine that is coordinating the goroutine profile (running on its own
		stack), or from the scheduler in preparation to execute gp1 (running on the
		system stack).
	
		doSigPreempt handles a preemption signal on gp.
	 func doubleCheckHeapPointersInterior(x, interior, size, dataSize uintptr, typ *_type, header **_type, span *mspan)
	 func doubleCheckTypePointersOfType(s *mspan, typ *_type, addr, size uintptr)
		dounlockOSThread is called by UnlockOSThread and unlockOSThread below
		after they update m->locked. Do not allow preemption during this call,
		or else the m might be different in this function than in the caller.
	
		dropg removes the association between m and the current goroutine m->curg (gp for short).
		Typically a caller sets gp's status away from Grunning and then
		immediately calls dropg to finish the job. The caller is also responsible
		for arranging that gp will be restarted using ready at an
		appropriate time. After calling dropg and arranging for gp to be
		readied later, the caller can do other work but eventually should
		call schedule to restart the scheduling of goroutines on this m.
	
		dropm puts the current m back onto the extra list.
		
		1. On systems without pthreads, like Windows
		dropm is called when a cgo callback has called needm but is now
		done with the callback and returning back into the non-Go thread.
		
		The main expense here is the call to signalstack to release the
		m's signal stack, and then the call to needm on the next callback
		from this thread. It is tempting to try to save the m for next time,
		which would eliminate both these costs, but there might not be
		a next time: the current thread (which Go does not control) might exit.
		If we saved the m for that thread, there would be an m leak each time
		such a thread exited. Instead, we acquire and release an m on each
		call. These should typically not be scheduling operations, just a few
		atomics, so the cost should be small.
		
		2. On systems with pthreads
		dropm is called while a non-Go thread is exiting.
		We allocate a pthread per-thread variable using pthread_key_create,
		to register a thread-exit-time destructor.
		We store the g into a thread-specific value associated with the pthread key
		when we first return back to C,
		so that the destructor will invoke dropm while the non-Go thread is exiting.
		This is much faster since it avoids expensive signal-related syscalls.
		
		This always runs without a P, so //go:nowritebarrierrec is required.
		
		This may run with a different stack than was recorded in g0 (there is no
		call to callbackUpdateSystemStack prior to dropm), so this must be
		//go:nosplit to avoid the stack bounds check.
	
		dump kinds & offsets of interesting fields in bv.
	
		dumpint() the kind & offset of each field in an object.
	 func dumpGCProg(p *byte)
	 func dumpgoroutine(gp *g)
	 func dumpgstatus(gp *g)
		dump a uint64 in a varint format parseable by encoding/binary.
	
		dump varint uint64 length followed by memory contents.
	 func dumpmemstats(m *MemStats)	
		dump an object.
	 func dumpotherroot(description string, to unsafe.Pointer)
	 func dumpStacksRec(node *traceMapNode, w traceWriter, stackBuf []uintptr) traceWriter
		dump information for a type.
	 func dumpTypesRec(node *traceMapNode, w traceWriter) traceWriter
	 func dwritebyte(b byte)
		elideWrapperCalling reports whether a wrapper function that called
		function id should be elided from stack traces.
	
		empty reports whether a read from c would block (that is, the channel is
		empty).  It is atomically correct and sequentially consistent at the moment
		it returns, but since the channel is unlocked, the channel may become
		non-empty immediately afterward.
	
		enableWER is called by setTraceback("wer").
		Windows Error Reporting (WER) is only supported on Windows.
	
		encoderune writes into p (which must be large enough) the UTF-8 encoding of the rune.
		It returns the number of bytes written.
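		utf8.EncodeRune offers a similar contract to user code (caller-supplied
		buffer, returns the byte count); for example:
		
			package main
			
			import (
				"fmt"
				"unicode/utf8"
			)
			
			func main() {
				buf := make([]byte, utf8.UTFMax) // large enough for any rune
				n := utf8.EncodeRune(buf, '€')
				fmt.Printf("%d bytes: % x\n", n, buf[:n]) // 3 bytes: e2 82 ac
			}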
	
		endCheckmarks ends the checkmarks phase.
	
		ensureSigM starts one global, sleeping thread to make sure at least one thread
		is available to catch signals enabled for os/signal.
	
		Standard syscall entry used by the go syscall library and normal cgo calls.
		
		This is exported via linkname to assembly in the syscall package and x/sys.
		
		Other packages should not be accessing entersyscall directly,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - gvisor.dev/gvisor
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		entersyscallblock should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - gvisor.dev/gvisor
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		envKeyEqual reports whether a == b, with ASCII-only case insensitivity
		on Windows. The two strings must have the same length.
	
		Schedules gp to run on the current M.
		If inheritTime is true, gp inherits the remaining time in the
		current time slice. Otherwise, it starts a new time slice.
		Never returns.
		
		Write barriers are allowed because this is called immediately after
		acquiring a P in several places.
	
		The goroutine g exited its system call.
		Arrange for it to run on a cpu again.
		This is called only from the go syscall library, not
		from the low-level system calls used by the runtime.
		
		Write barriers are not allowed because our P may have been stolen.
		
		This is exported via linkname to assembly in the syscall package.
		
		exitsyscall should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - gvisor.dev/gvisor
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		exitsyscall slow path on g0.
		Failed to acquire P, enqueue gp as runnable.
		
		Called via mcall, so gp is the calling g from this M.
	 func exitsyscallfast(oldp *p) bool	
		exitsyscallfast_reacquired is the exitsyscall path on which this G
		has successfully reacquired the P it was running on before the
		syscall.
	
		exitThread terminates the current thread, writing *wait = freeMStack when
		the stack is safe to reclaim.
	
		expandCgoFrames expands frame information for pc, known to be
		a non-Go function, using the cgoSymbolizer hook. expandCgoFrames
		returns nil if pc could not be expanded.
	
		Type Parameters:
			F: floaty
	
		fastexprand returns a random number from an exponential distribution with
		the specified mean.
	
		fastlog2 implements a fast approximation to the base 2 log of a
		float64. This is used to compute a geometric distribution for heap
		sampling, without introducing dependencies into package math. This
		uses a very rough approximation using the float64 exponent and the
		first 25 bits of the mantissa. The top 5 bits of the mantissa are
		used to load limits from a table of constants and the rest are used
		to scale linearly between them.
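		A much cruder sketch of the same idea, exponent plus a linear interpolation
		over the mantissa, without the table refinement described above
		(fastLog2Sketch is an illustrative name, not the runtime's code):
		
			package main
			
			import (
				"fmt"
				"math"
			)
			
			// fastLog2Sketch approximates log2(x) from the binary exponent and a
			// straight-line approximation of log2 over the mantissa range [0.5, 1).
			func fastLog2Sketch(x float64) float64 {
				frac, exp := math.Frexp(x) // x = frac * 2^exp, frac in [0.5, 1)
				// The line through log2(0.5) = -1 and log2(1) = 0 is 2*frac - 2.
				return float64(exp) + (2*frac - 2)
			}
			
			func main() {
				for _, x := range []float64{1, 2, 3, 10, 1000} {
					fmt.Printf("x=%6g  approx=%8.4f  exact=%8.4f\n", x, fastLog2Sketch(x), math.Log2(x))
				}
			}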
	
		fatal triggers a fatal error that dumps a stack trace and exits.
		
		fatal is equivalent to throw, but is used when user code is expected to be
		at fault for the failure, such as racing map writes.
		
		fatal does not include runtime frames, system goroutines, or frame metadata
		(fp, sp, pc) in the stack trace unless GOTRACEBACK=system or higher.
	
		fatalpanic implements an unrecoverable panic. It is like fatalthrow, except
		that if msgs != nil, fatalpanic also prints panic messages and decrements
		runningPanicDefers once main is blocked from exiting.
	
		fatalthrow implements an unrecoverable runtime throw. It freezes the
		system, prints stack traces starting from its caller, and terminates the
		process.
	
		fillAligned returns x but with all zeroes in m-aligned
		groups of m bits set to 1 if any bit in the group is non-zero.
		
		For example, fillAligned(0x0100a3, 8) == 0xff00ff.
		
		Note that if m == 1, this is a no-op.
		
		m must be a power of 2 <= maxPagesPerPhysPage.
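		A naive loop that matches the description and the example above (the real
		code uses specialized bit tricks per group size; this sketch only shows the
		semantics):
		
			package main
			
			import "fmt"
			
			// fillAlignedSketch sets each m-aligned group of m bits entirely to ones
			// if any bit in that group is set.
			func fillAlignedSketch(x uint64, m uint) uint64 {
				for i := uint(0); i < 64; i += m {
					mask := ((uint64(1) << m) - 1) << i
					if x&mask != 0 {
						x |= mask
					}
				}
				return x
			}
			
			func main() {
				fmt.Printf("%#x\n", fillAlignedSketch(0x0100a3, 8)) // 0xff00ff, matching the example
			}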
	
		findBitRange64 returns the bit index of the first set of
		n consecutive 1 bits. If no consecutive set of 1 bits of
		size n may be found in c, then it returns an integer >= 64.
		n must be > 0.
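		One way to express the search is to repeatedly AND c with itself shifted
		right by one: after n-1 rounds, the surviving bits mark the starts of runs of
		n ones. A sketch with an illustrative name (the runtime's version is more
		efficient for large n):
		
			package main
			
			import (
				"fmt"
				"math/bits"
			)
			
			// findBitRange64Sketch returns the lowest bit index at which c has n
			// consecutive 1 bits, or a value >= 64 if there is no such run.
			func findBitRange64Sketch(c uint64, n uint) uint {
				for i := uint(1); i < n; i++ {
					c &= c >> 1 // bit i survives only if bits i..i+i were all set
				}
				if c == 0 {
					return 64
				}
				return uint(bits.TrailingZeros64(c))
			}
			
			func main() {
				fmt.Println(findBitRange64Sketch(0b1110_0110, 3)) // 5: run of three ones starts at bit 5
				fmt.Println(findBitRange64Sketch(0b1110_0110, 4)) // 64: no run of four
			}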
	
		findfunc looks up function metadata for a PC.
		
		It is nosplit because it's part of the isgoexception
		implementation.
		
		findfunc should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/phuslu/log
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		findmoduledatap looks up the moduledata for a PC.
		
		It is nosplit because it's part of the isgoexception
		implementation.
	
		findObject returns the base address for the heap object containing
		the address p, the object's span, and the index of the object in s.
		If p does not point into a heap object, it returns base == 0.
		
		If p is an invalid heap pointer and debug.invalidptr != 0,
		findObject panics.
		
		refBase and refOff optionally give the base address of the object
		in which the pointer p was found and the byte offset at which it
		was found. These are used for error reporting.
		
		It is nosplit so it is safe for p to be a pointer to the current goroutine's stack.
		Since p is a uintptr, it would not be adjusted if the stack were to move.
		
		findObject should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/sonic
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		Finds a runnable goroutine to execute.
		Tries to steal from other P's, get g from local or global queue, poll network.
		tryWakeP indicates that the returned goroutine is not normal (GC worker, trace
		reader) so the caller should try to wake a P.
	
		finishsweep_m ensures that all spans are swept.
		
		The world must be stopped. This ensures there are no sweeps in
		progress.
	 func fips_fatal(s string)
	 func fips_setIndicator(indicator uint8)
		float64bits returns the IEEE 754 binary representation of f.
	
		float64frombits returns the floating point number corresponding to
		the IEEE 754 binary representation b.
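		The exported equivalents are math.Float64bits and math.Float64frombits; a
		small round-trip example, including flipping the sign bit directly:
		
			package main
			
			import (
				"fmt"
				"math"
			)
			
			func main() {
				f := -1.5
				b := math.Float64bits(f) // sign | exponent | mantissa
				fmt.Printf("%#x\n", b)
				fmt.Println(math.Float64frombits(b) == f) // round-trips exactly: true
				// Flipping the sign bit negates the value without any float arithmetic.
				fmt.Println(math.Float64frombits(b ^ (1 << 63))) // 1.5
			}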
	
		flushallmcaches flushes the mcaches of all Ps.
		
		The world must be stopped.
	
		flushmcache flushes the mcache of allp[i].
		
		The world must be stopped.
	
		Type Parameters:
			F: floaty
	
		Type Parameters:
			F: floaty
	
		fmtNSAsMS nicely formats ns nanoseconds as milliseconds.
	
		Type Parameters:
			F: floaty
	
		forEachG calls fn on every G from allgs.
		
		forEachG takes a lock to exclude concurrent addition of new Gs.
	
		forEachGRace calls fn on every G from allgs.
		
		forEachGRace avoids locking, but does not exclude addition of new Gs during
		execution, which may be missed.
	
		forEachP calls fn(p) for every P p when p reaches a GC safe point.
		If a P is currently executing code, this will bring the P to a GC
		safe point and execute fn on that P. If the P is not executing code
		(it is idle or in a syscall), this will call fn(p) directly while
		preventing the P from exiting its state. This does not ensure that
		fn will run on every CPU executing Go code, but it acts as a global
		memory barrier. GC uses this as a "ragged barrier."
		
		The caller must hold worldsema. fn must not refer to any
		part of the current goroutine's stack, since the GC may move it.
	
		forEachPInternal calls fn(p) for every P p when p reaches a GC safe point.
		It is the internal implementation of forEachP.
		
		The caller must hold worldsema and either must ensure that a GC is not
		running (otherwise this may deadlock with the GC trying to preempt this P)
		or it must leave its goroutine in a preemptible state before it switches
		to the systemstack. Due to these restrictions, prefer forEachP when possible.
	
		fpTracebackPartialExpand records a call stack obtained starting from fp.
		This function will skip the given number of frames, properly accounting for
		inlining, and save remaining frames as "physical" return addresses. The
		consumer should later use CallersFrames or similar to expand inline frames.
	
		fpTracebackPCs populates pcBuf with the return addresses for each frame and
		returns the number of PCs written to pcBuf. The returned PCs correspond to
		"physical frames" rather than "logical frames"; that is if A is inlined into
		B, this will return a PC for only B.
	
		fpunwindExpand expands a call stack from pcBuf into dst,
		returning the number of PCs written to dst.
		pcBuf and dst should not overlap.
		
		fpunwindExpand checks if pcBuf contains logical frames (which include inlined
		frames) or physical frames (produced by frame pointer unwinding) using a
		sentinel value in pcBuf[0]. Logical frames are simply returned without the
		sentinel. Physical frames are turned into logical frames via inline unwinding
		and by applying the skip value that's stored in pcBuf[0].
	
		freemcache releases resources associated with this
		mcache and puts the object onto a free list.
		
		In some cases there is no way to simply release
		resources, such as statistics, so donate them to
		a different mcache (the recipient).
	
		freeSomeWbufs frees some workbufs back to the heap and returns
		true if it should be called again to free more.
	
		freeSpecial performs any cleanup on special s and deallocates it.
		s must already be unlinked from the specials list.
	
		freeStackSpans frees unused stack spans at the end of GC.
	
		freeUserArenaChunk releases the user arena represented by s back to the runtime.
		
		x must be a live pointer within s.
		
		The runtime will set the user arena to fault once it's safe (the GC is no longer running)
		and then once the user arena is no longer referenced by the application, will allow it to
		be reused.
	
		Similar to stopTheWorld, but best-effort and can be called several times.
		There is no reverse operation; it is used during crashing.
		This function must not lock any mutexes.
	
		full reports whether a send on c would block (that is, the channel is full).
		It uses a single word-sized read of mutable state, so although
		the answer is instantaneously true, the correct answer may have changed
		by the time the calling function receives the return value.
	
		funcdata returns a pointer to the ith funcdata for f.
		funcdata should be kept in sync with cmd/link:writeFuncs.
	
		funcline1 should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/phuslu/log
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		funcMaxSPDelta returns the maximum spdelta at any point in f.
	
		funcNameForPrint returns the function name for printing to the user.
	
		funcNamePiecesForPrint returns the function name for printing to the user.
		It returns three pieces so it doesn't need an allocation for string
		concatenation.
	 func funcspdelta(f funcInfo, targetpc uintptr) int32	
		Atomically,
		
			if(*addr == val) sleep
		
		Might be woken up spuriously; that's allowed.
		Don't sleep longer than ns; ns < 0 means forever.
	
		If any procs are sleeping on addr, wake up at most cnt.
	
		gcAssistAlloc performs GC work to make gp's assist debt positive.
		gp must be the calling user goroutine.
		
		This must be called with preemption enabled.
	
		gcAssistAlloc1 is the part of gcAssistAlloc that runs on the system
		stack. This is a separate function to make it easier to see that
		we're not capturing anything from the user stack, since the user
		stack may move while we're in this function.
		
		gcAssistAlloc1 indicates whether this assist completed the mark
		phase by setting gp.param to non-nil. This can't be communicated on
		the stack since it may move.
	
		gcBgMarkPrepare sets up state for background marking.
		Mutator assists must not yet be enabled.
	
		gcBgMarkStartWorkers prepares background mark worker goroutines. These
		goroutines will not run until the mark phase, but they must be started while
		the work is not stopped and from a regular G stack. The caller must hold
		worldsema.
	 func gcBgMarkWorker(ready chan struct{})	
		gcControllerCommit is gcController.commit, but passes arguments from live
		(non-test) data. It also updates any consumers of the GC pacing, such as
		sweep pacing and the background scavenger.
		
		Calls gcController.commit.
		
		The heap lock must be held, so this must be executed on the system stack.
	
		gcDrain scans roots and objects in work buffers, blackening grey
		objects until it is unable to get more work. It may return before
		GC is done; it's the caller's responsibility to balance work from
		other Ps.
		
		If flags&gcDrainUntilPreempt != 0, gcDrain returns when g.preempt
		is set.
		
		If flags&gcDrainIdle != 0, gcDrain returns when there is other work
		to do.
		
		If flags&gcDrainFractional != 0, gcDrain self-preempts when
		pollFractionalWorkerExit() returns true. This implies
		gcDrainNoBlock.
		
		If flags&gcDrainFlushBgCredit != 0, gcDrain flushes scan work
		credit to gcController.bgScanCredit every gcCreditSlack units of
		scan work.
		
		gcDrain will always return if there is a pending STW or forEachP.
		
		Disabling write barriers is necessary to ensure that after we've
		confirmed that we've drained gcw, that we don't accidentally end
		up flipping that condition by immediately adding work in the form
		of a write barrier buffer flush.
		
		Don't set nowritebarrierrec because it's safe for some callees to
		have write barriers enabled.
	
		gcDrainMarkWorkerDedicated is a wrapper for gcDrain that exists to better account
		mark time in profiles.
	
		gcDrainMarkWorkerFractional is a wrapper for gcDrain that exists to better account
		mark time in profiles.
	
		gcDrainMarkWorkerIdle is a wrapper for gcDrain that exists to better account
		mark time in profiles.
	
		gcDrainN blackens grey objects until it has performed roughly
		scanWork units of scan work or the G is preempted. This is
		best-effort, so it may perform less work if it fails to get a work
		buffer. Otherwise, it will perform at least n units of work, but
		may perform more because scanning is always done in whole object
		increments. It returns the amount of scan work performed.
		
		The caller goroutine must be in a preemptible state (e.g.,
		_Gwaiting) to prevent deadlocks during stack scanning. As a
		consequence, this must be called on the system stack.
	
		gcDumpObject dumps the contents of obj for debugging and marks the
		field at byte offset off in obj.
	
		gcenable is called after the bulk of the runtime initialization,
		just before we're about to start letting user code run.
		It kicks off the background sweeper goroutine, the background
		scavenger goroutine, and enables GC.
	
		gcFlushBgCredit flushes scanWork units of background scan work
		credit. This first satisfies blocked assists on the
		work.assistQueue and then flushes any remaining credit to
		gcController.bgScanCredit.
		
		Write barriers are disallowed because this is used by gcDrain after
		it has ensured that all work is drained and this must preserve that
		condition.
	
		gcMark runs the mark phase (or, for concurrent GC, mark termination).
		All gcWork caches must be empty.
		STW is in effect at this point.
	
		gcMarkDone transitions the GC from mark to mark termination if all
		reachable objects have been marked (that is, there are no grey
		objects and can be no more in the future). Otherwise, it flushes
		all local work to the global queues where it can be discovered by
		other workers.
		
		This should be called when all local mark work has been drained and
		there are no remaining workers. Specifically, when
		
			work.nwait == work.nproc && !gcMarkWorkAvailable(p)
		
		The calling context must be preemptible.
		
		Flushing local work is important because idle Ps may have local
		work queued. This is the only way to make that work visible and
		drive GC to completion.
		
		It is explicitly okay to have write barriers in this function. If
		it does transition to mark termination, then all reachable objects
		have been marked, so the write barrier cannot shade any more
		objects.
	
		gcmarknewobject marks a newly allocated object black. obj must
		not contain any non-nil pointers.
		
		This is nosplit so it can manipulate a gcWork without preemption.
	
		gcMarkRootCheck checks that all roots have been scanned. It is
		purely for debugging.
	
		gcMarkRootPrepare queues root scanning jobs (stacks, globals, and
		some miscellany) and initializes scanning-related state.
		
		The world must be stopped.
	
		World must be stopped and mark assists and background workers must be
		disabled.
	
		gcMarkTinyAllocs greys all active tiny alloc blocks.
		
		The world must be stopped.
	
		gcMarkWorkAvailable reports whether executing a mark worker
		on p is potentially useful. p may be nil, in which case it only
		checks the global sources of work.
	
		gcPaceScavenger updates the scavenger's pacing, particularly
		its rate and RSS goal. For this, it requires the current heapGoal,
		and the heapGoal for the previous GC cycle.
		
		The RSS goal is based on the current heap goal with a small overhead
		to accommodate non-determinism in the allocator.
		
		The pacing is based on scavengePageRate, which applies to both regular and
		huge pages. See that constant for more information.
		
		Must be called whenever GC pacing is updated.
		
		mheap_.lock must be held or the world must be stopped.
	
		gcPaceSweeper updates the sweeper's pacing parameters.
		
		Must be called whenever the GC's pacing is updated.
		
		The world must be stopped, or mheap_.lock must be held.
	
		gcParkAssist puts the current goroutine on the assist queue and parks.
		
		gcParkAssist reports whether the assist is now satisfied. If it
		returns false, the caller must retry the assist.
	
		gcParkStrongFromWeak puts the current goroutine on the weak->strong queue and parks.
	
		gcResetMarkState resets global state prior to marking (concurrent
		or STW) and resets the stack scan state of all Gs.
		
		This is safe to do without the world stopped because any Gs created
		during or after this will start out in the reset state.
		
		gcResetMarkState must be called on the system stack because it acquires
		the heap lock. See mheap for details.
	
		gcStart starts the GC. It transitions from _GCoff to _GCmark (if
		debug.gcstoptheworld == 0) or performs all of GC (if
		debug.gcstoptheworld != 0).
		
		This may return without performing this transition in some cases,
		such as when called on a system stack or with locks held.
	
		Stops the current m for stopTheWorld.
		Returns when the world is restarted.
	
		gcSweep must be called on the system stack because it acquires the heap
		lock. See mheap for details.
		
		Returns true if the heap was fully swept by this function.
		
		The world must be stopped.
	
		gcTestIsReachable performs a GC and returns a bit set where bit i
		is set if ptrs[i] is reachable.
	
		gcTestMoveStackOnNextCall causes the stack to be moved on a call
		immediately following the call to this. It may not work correctly
		if any other work appears after this call (such as returning).
		Typically the following call should be marked go:noinline so it
		performs a stack check.
		
		In rare cases this may not cause the stack to move, specifically if
		there's a preemption between this call and the next.
	
		gcTestPointerClass returns the category of what p points to, one of:
		"heap", "stack", "data", "bss", "other". This is useful for checking
		that a test is doing what it's intended to do.
		
		This is nosplit simply to avoid extra pointer shuffling that may
		complicate a test.
	
		gcWaitOnMark blocks until GC finishes the Nth mark phase. If GC has
		already completed this mark phase, it returns immediately.
	
		gcWakeAllAssists wakes all currently blocked assists. This is used
		at the end of a GC cycle. gcBlackenEnabled must be false to prevent
		new assists from going to sleep after this point.
	
		gcWakeAllStrongFromWeak wakes all currently blocked weak->strong
		conversions. This is used at the end of a GC cycle.
		
		work.strongFromWeak.block must be false to prevent woken goroutines
		from immediately going back to sleep.
	
		Called from compiled code; declared for vet; do NOT call from Go.
	
		gcWriteBarrier2 should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/sonic
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		Called from compiled code; declared for vet; do NOT call from Go.
	
		golang.org/x/sys/cpu uses getAuxv via linkname.
		Do not remove or change the type signature.
		(See go.dev/issue/57336.)
		
		getAuxv should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/cilium/ebpf
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		getcallerfp returns the frame pointer of the caller of the caller
		of this function.
	
		getempty pops an empty work buffer off the work.empty list,
		allocating new buffers if none are available.
	
		Return an M from the extra M list. Returns last == true if the list becomes
		empty because of this call.
		
		Spins waiting for an extra M, so caller must ensure that the list always
		contains or will soon contain at least one M.
	
		getfp returns the frame pointer register of its caller or 0 if not implemented.
		TODO: Make this a compiler intrinsic
	
		getg returns the pointer to the current g.
		The compiler rewrites calls to this function into instructions
		that fetch the g directly (from TLS or from the dedicated register).
	
		getGCMask returns the pointer/nonpointer bitmask for type t.
		
		nosplit because it is used during write barriers and must not be preempted.
	
		nosplit because it is used during write barriers and must not be preempted.
	
		getGodebugEarly extracts the environment variable GODEBUG from the environment on
		Unix-like operating systems and returns it. This function exists to extract GODEBUG
		early before much of the runtime is initialized.
	
		getitab should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/sonic
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	 func getLockRank(l *mutex) lockRank	
		A helper function for EnsureDropM.
		
		getm should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - fortio.org/log
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		getMCache is a convenience function which tries to obtain an mcache.
		
		Returns nil if we're not bootstrapping or we don't have a P. The caller's
		P must not change, so we must be in a non-preemptible state.
	
		Retrieves or creates a weak pointer handle for the object p.
	
		getStaticuint64s is called by the reflect package to get a pointer
		to the read-only array.
	
		Get from gfree list.
		If local list is empty, grab a batch from global list.
	
		Purge all cached G's from gfree list to the global list.
	
		Put on gfree list.
		If local list is too long, transfer a batch to the global list.
	
		Try get a batch of G's from the global runnable queue.
		sched.lock must be held.
	
		Put gp on the global runnable queue.
		sched.lock must be held.
		May run during STW, so write barriers are not allowed.
	
		Put a batch of runnable goroutines on the global runnable queue.
		This clears *batch.
		sched.lock must be held.
		May run during STW, so write barriers are not allowed.
	
		Put gp at the head of the global runnable queue.
		sched.lock must be held.
		May run during STW, so write barriers are not allowed.
	
		used by cmd/cgo
	 func godebug_registerMetric(name string, read func() uint64)
	 func godebug_setNewIncNonDefault(newIncNonDefault func(string) func())
	 func godebug_setUpdate(update func(string, string))
	 func godebugNotify(envChanged bool)
		goexit is the return stub at the top of every goroutine call stack.
		Each goroutine stack is constructed as if goexit called the
		goroutine's entry point function, so that when the entry point
		function returns, it will return to goexit, which will call goexit1
		to perform the actual exit.
		
		This function must never be called directly. Call goexit1 instead.
		gentraceback assumes that goexit terminates the stack. A direct
		call on the stack will cause gentraceback to stop walking the stack
		prematurely and if there is leftover state it may panic.
	
		goexit continuation on g0.
	
		Finishes execution of the current goroutine.
	
		The implementation of the predeclared function panic.
		The compiler emits calls to this function.
		
		gopanic should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - go.undefinedlabs.com/scopeagent
		  - github.com/goplus/igop
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		failures in the comparisons for s[x], 0 <= x < y (y == len(s))
	 func goPanicIndexU(x uint, y int)
	 func goPanicSlice3Acap(x int, y int)
	 func goPanicSlice3AcapU(x uint, y int)
		failures in the comparisons for s[::x], 0 <= x <= y (y == len(s) or cap(s))
	 func goPanicSlice3AlenU(x uint, y int)	
		failures in the comparisons for s[:x:y], 0 <= x <= y
	 func goPanicSlice3BU(x uint, y int)	
		failures in the comparisons for s[x:y:], 0 <= x <= y
	 func goPanicSlice3CU(x uint, y int)
	 func goPanicSliceAcap(x int, y int)
	 func goPanicSliceAcapU(x uint, y int)
		failures in the comparisons for s[:x], 0 <= x <= y (y == len(s) or cap(s))
	 func goPanicSliceAlenU(x uint, y int)	
		failures in the comparisons for s[x:y], 0 <= x <= y
	 func goPanicSliceBU(x uint, y int)	
		failures in the conversion ([x]T)(s) or (*[x]T)(s), 0 <= x <= y, y == len(s)
	
		Puts the current goroutine into a waiting state and calls unlockf on the
		system stack.
		
		If unlockf returns false, the goroutine is resumed.
		
		unlockf must not access this G's stack, as it may be moved between
		the call to gopark and the call to unlockf.
		
		Note that because unlockf is called after putting the G into a waiting
		state, the G may have already been readied by the time unlockf is called
		unless there is external synchronization preventing the G from being
		readied. If unlockf returns false, it must guarantee that the G cannot be
		externally readied.
		
		Reason explains why the goroutine has been parked. It is displayed in stack
		traces and heap dumps. Reasons should be unique and descriptive. Do not
		re-use reasons, add new ones.
		
		gopark should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - gvisor.dev/gvisor
		  - github.com/sagernet/gvisor
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		Puts the current goroutine into a waiting state and unlocks the lock.
		The goroutine can be made runnable again by calling goready(gp).
	 func gopreempt_m(gp *g)	
		goready should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - gvisor.dev/gvisor
		  - github.com/sagernet/gvisor
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		The implementation of the predeclared function recover.
		Cannot split the stack because it needs to reliably
		find the stack segment of its caller.
		
		TODO(rsc): Once we commit to CopyStackAlways,
		this doesn't need to be nosplit.
	 func goroutineheader(gp *g)
	 func goroutineProfileInternal(p []profilerecord.StackRecord) (n int, ok bool)
		labels may be nil. If labels is non-nil, it must have the same length as p.
	 func goroutineProfileWithLabelsConcurrent(p []profilerecord.StackRecord, labels []unsafe.Pointer) (n int, ok bool)
	 func goroutineProfileWithLabelsSync(p []profilerecord.StackRecord, labels []unsafe.Pointer) (n int, ok bool)
		Ready the goroutine arg.
	
		Gosched continuation on g0.
	
		goschedguarded yields the processor like gosched, but also checks
		for forbidden states and opts out of the yield in those cases.
	
		goschedguarded is a forbidden-states-avoided version of gosched_m.
	
		goschedIfBusy yields the processor like gosched, but only does so if
		there are no idle Ps or if we're on the only P and there's nothing in
		the run queue. In both cases, there is freely available idle time.
	 func goschedImpl(gp *g, preempted bool)	
		adjust Gobuf as if it executed a call to fn with context ctxt
		and then stopped before the first instruction in fn.
	
		adjust Gobuf as if it executed a call to fn
		and then stopped before the first instruction in fn.
	
		goStatusToTraceGoStatus translates the internal status to traceGoStatus.
		
		status must not be _Gdead or any status whose name has the suffix "_unused."
		
		nosplit because it's part of writing an event for an M, which must not
		have any stack growth.
	
		This is exported via linkname to assembly in syscall (for Plan9) and cgo.
	 func gostringnocopy(str *byte) string	
		gotraceback returns the current traceback settings.
		
		If level is 0, suppress all tracebacks.
		If level is 1, show tracebacks, but exclude runtime frames.
		If level is 2, show tracebacks including runtime frames.
		If all is set, print all goroutine stacks. Otherwise, print just the current goroutine.
		If crash is set, crash (core dump, etc) after tracebacking.
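		These settings are driven by the GOTRACEBACK environment variable and, at run
		time, by runtime/debug.SetTraceback; for example:
		
			package main
			
			import "runtime/debug"
			
			func main() {
				// Ask for stacks of all goroutines, as with GOTRACEBACK=all.
				// SetTraceback cannot lower the level below what the environment requested.
				debug.SetTraceback("all")
			
				panic("show traceback level") // the resulting traceback honors the setting above
			}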
	
		goyield is like Gosched, but it:
		- emits a GoPreempt trace event instead of a GoSched trace event
		- puts the current G on the runq of the current P instead of the globrunq
		
		goyield should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - gvisor.dev/gvisor
		  - github.com/sagernet/gvisor
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		obj is the start of an object with mark mbits.
		If it isn't already marked, mark it and enqueue into gcw.
		base and off are for debugging only and could be removed.
		
		See also wbBufFlush1, which partially duplicates this logic.
	
		growslice allocates new backing store for a slice.
		
		arguments:
		
			oldPtr = pointer to the slice's backing array
			newLen = new length (= oldLen + num)
			oldCap = original slice's capacity.
			   num = number of elements being added
			    et = element type
		
		return values:
		
			newPtr = pointer to the new backing store
			newLen = same value as the argument
			newCap = capacity of the new backing store
		
		Requires that uint(newLen) > uint(oldCap).
		Assumes the original slice length is newLen - num
		
		A new backing store is allocated with space for at least newLen elements.
		Existing entries [0, oldLen) are copied over to the new backing store.
		Added entries [oldLen, newLen) are not initialized by growslice
		(although for pointer-containing element types, they are zeroed). They
		must be initialized by the caller.
		Trailing entries [newLen, newCap) are zeroed.
		
		growslice's odd calling convention makes the generated code that calls
		this function simpler. In particular, it accepts and returns the
		new length so that the old length is not live (does not need to be
		spilled/restored) and the new length is returned (also does not need
		to be spilled/restored).
		
		growslice should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/sonic
		  - github.com/chenzhuoyu/iasm
		  - github.com/cloudwego/dynamicgo
		  - github.com/ugorji/go/codec
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
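		
		A user-level way to watch growslice at work is to observe capacity jumps during
		append; the exact capacities chosen are an implementation detail and may differ
		between releases:
		
			package main
			
			import "fmt"
			
			func main() {
				s := make([]int, 0, 1)
				prev := cap(s)
				for i := 0; i < 2000; i++ {
					s = append(s, i) // may call growslice when the capacity is exhausted
					if c := cap(s); c != prev {
						fmt.Printf("len=%d: cap %d -> %d\n", len(s), prev, c)
						prev = c
					}
				}
			}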
	
		write to goroutine-local buffer if diverting output,
		or else standard error.
	
		Hands off P from syscall or locked M.
		Always runs without a P, so write barriers are not allowed.
	
		heapBitsInSpan returns true if the size of an object implies its ptr/scalar
		data is stored at the end of the span, and is accessible via span.heapBits.
		
		Note: this works for both rounded-up sizes (span.elemsize) and unrounded
		type sizes because minSizeForMallocHeader is guaranteed to be at a size
		class boundary.
	
		Helper for constructing a slice for the span's heap bits.
	
		heapObjectsCanMove always returns false in the current garbage collector.
		It exists for go4.org/unsafe/assume-no-moving-gc, which is an
		unfortunate idea that had an even more unfortunate implementation.
		Every time a new Go release happened, the package stopped building,
		and the authors had to add a new file with a new //go:build line, and
		then the entire ecosystem of packages with that as a dependency had to
		explicitly update to the new version. Many packages depend on
		assume-no-moving-gc transitively, through paths like
		inet.af/netaddr -> go4.org/intern -> assume-no-moving-gc.
		This was causing a significant amount of friction around each new
		release, so we added this bool for the package to //go:linkname
		instead. The bool is still unfortunate, but it's not as bad as
		breaking the ecosystem on every new release.
		
		If the Go garbage collector ever does move heap objects, we can set
		this to true to break all the programs using assume-no-moving-gc.
	
		heapRetained returns an estimate of the current heap RSS.
	
		hexdumpWords prints a word-oriented hex dump of [p, end).
		
		If mark != nil, it will be called with each printed word's address
		and should return a character mark to appear just before that
		word's value. It can return 0 to indicate no mark.
	
		inf2one returns a signed 1 if f is an infinity and a signed 0 otherwise.
		The sign of the result is the sign of f.
	
		inheap reports whether b is a pointer into a (potentially dead) heap object.
		It returns false for pointers into mSpanManual spans.
		Non-preemptible because it is used by write barriers.
	
		inHeapOrStack is a variant of inheap that returns true for pointers
		into any allocated heap span.
	
		start forcegc helper goroutine
	
		initMetrics initializes the metrics map if it hasn't been yet.
		
		metricsSema must be held.
	
		Initialize signals.
		Called by libpreinit so runtime may not be initialized.
	
		injectglist adds each runnable G on the list to some run queue,
		and clears glist. If there is no current P, they are added to the
		global queue, and up to npidle M's are started to run them.
		Otherwise, for each idle P, this adds a G to the global queue
		and starts an M. Any remaining G's are added to the current P's
		local run queue.
		This may temporarily acquire sched.lock.
		Can run concurrently with GC.
	
		inPersistentAlloc reports whether p points to memory allocated by
		persistentalloc. This must be nosplit because it is called by the
		cgo checker code, which is called by the write barrier code.
	
		inRange reports whether v0 or v1 are in the range [r0, r1].
	 func interequal(p, q unsafe.Pointer) bool	
		interfaceSwitch compares t against the list of cases in s.
		If t matches case i, interfaceSwitch returns the case index i and
		an itab for the pair <t, s.Cases[i]>.
		If there is no match, it returns N, nil, where N is the number
		of cases.
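		
		In current compilers this backs type switches whose cases are interface types;
		a plain-Go illustration of the construct (the lowering itself is a compiler detail):
		
			package main
			
			import "fmt"
			
			func describe(x any) string {
				// A type switch over interface-typed cases: the dynamic type of x
				// is matched against the case list, much as interfaceSwitch returns
				// a case index and an itab.
				switch v := x.(type) {
				case fmt.Stringer:
					return "Stringer: " + v.String()
				case error:
					return "error: " + v.Error()
				default:
					return "no case matched"
				}
			}
			
			func main() {
				fmt.Println(describe(fmt.Errorf("boom")))
				fmt.Println(describe(42))
			}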
	
		Active spinning for sync.Mutex.
	 func internal_sync_runtime_SemacquireMutex(addr *uint32, lifo bool, skipframes int)	 func internal_sync_runtime_Semrelease(addr *uint32, handoff bool, skipframes int)	
		internal_syscall_gostring is a version of gostring for internal/syscall/unix.
	
		inUserArenaChunk returns true if p points to a user arena chunk.
	
		inVDSOPage reports whether pc is on the VDSO page.
	
		isAbortPC reports whether pc is the program counter at which
		runtime.abort raises a signal.
		
		It is nosplit because it's part of the isgoexception
		implementation.
	
		isAsyncSafePoint reports whether gp at instruction PC is an
		asynchronous safe point. This indicates that:
		
		1. It's safe to suspend gp and conservatively scan its stack and
		registers. There are no potentially hidden pointer values and it's
		not in the middle of an atomic sequence like a write barrier.
		
		2. gp has enough stack space to inject the asyncPreempt call.
		
		3. It's generally safe to interact with the runtime, even if we're
		in a signal handler stopped here. For example, there are no runtime
		locks held, so acquiring a runtime lock won't self-deadlock.
		
		In some cases the PC is safe for asynchronous preemption but it
		also needs to adjust the resumption PC. The new PC is returned in
		the second result.
	
		isDirectIface reports whether t is stored directly in an interface value.
	
		isExportedRuntime reports whether name is an exported runtime function.
		It is only for runtime functions, so ASCII A-Z is fine.
	
		isFinite reports whether f is neither NaN nor an infinity.
	
		isInf reports whether f is an infinity.
	
		isNaN reports whether f is an IEEE 754 “not-a-number” value.
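		
		A minimal sketch of the classification tricks these three helpers rely on
		(the runtime's own definitions may differ in detail):
		
			func isNaN(f float64) bool    { return f != f }   // NaN is the only value unequal to itself
			func isFinite(f float64) bool { return f-f == 0 } // Inf-Inf and NaN-NaN are NaN, not 0
			func isInf(f float64) bool    { return !isNaN(f) && !isFinite(f) }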
	
		isPinned checks if a Go pointer is pinned.
		nosplit, because it's called from nosplit code in cgocheck.
	
		isShrinkStackSafe returns whether it's safe to attempt to shrink
		gp's stack. Shrinking the stack is only safe when we have precise
		pointer maps for all frames on the stack. The caller must hold the
		_Gscan bit for gp or must be running gp itself.
	
		isSweepDone reports whether all spans are swept.
		
		Note that this condition may transition from false to true at any
		time as the sweeper runs. It may transition from true to false if a
		GC runs; to prevent that the caller must be non-preemptible or must
		somehow block GC progress.
	
		isSystemGoroutine reports whether the goroutine g must be omitted
		in stack dumps and deadlock detector. This is any goroutine that
		starts at a runtime.* entry point, except for runtime.main,
		runtime.handleAsyncEvent (wasm only) and sometimes runtime.runfinq.
		
		If fixed is true, any goroutine that can vary between user and
		system (that is, the finalizer goroutine) is considered a user
		goroutine.
	 func itab_callback(tab *itab)	
		itabAdd adds the given itab to the itab hash table.
		itabLock must be held.
	 func itabHashFunc(inter *interfacetype, typ *_type) uintptr	
		itabInit fills in the m.Fun array with all the code pointers for
		the m.Inter/m.Type pair. If the type does not implement the interface,
		it sets m.Fun[0] to 0 and returns the name of an interface function that is missing.
		If !firstTime, itabInit will not write anything to m.Fun (see issue 65962).
		It is ok to call this multiple times on the same m, even concurrently
		(although it will only be called once with firstTime==true).
	 func iterate_itabs(fn func(*itab))	
		itoa converts val to a decimal representation. The result is
		written somewhere within buf and the location of the result is returned.
		buf must be at least 20 bytes.
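		
		A minimal sketch of that contract (not the runtime's code): digits are written
		right-to-left into the tail of buf and the slice holding them is returned.
		
			func itoa(buf []byte, val uint64) []byte {
				i := len(buf) - 1
				for val >= 10 {
					buf[i] = byte(val%10 + '0')
					i--
					val /= 10
				}
				buf[i] = byte(val + '0')
				return buf[i:] // "somewhere within buf"
			}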
	
		itoaDiv formats val/(10**dec) into buf.
	
		We use the uintptr mutex.key and note.key as a uint32.
	
		keys for implementing maps.keys
	
		less checks whether a < b, treating a and b as running counts that may wrap
		around the 32-bit range, under the assumption that their "unwrapped"
		difference is always less than 2^31.
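		
		One way to write such a comparison, shown as a sketch (the runtime's version
		may be spelled differently): as long as the unwrapped difference stays below
		2^31, the signed 32-bit difference has the right sign.
		
			func less(a, b uint32) bool {
				return int32(a-b) < 0
			}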
	
		levelIndexToOffAddr converts an index into summary[level] into
		the corresponding address in the offset address space.
	
		lfnodeValidate panics if node is not a valid address for use with
		lfstack.push. This only needs to be called when node is allocated.
	 func lfstackPack(node *lfnode, cnt uintptr) uint64	 func lfstackUnpack(val uint64) *lfnode	
		Called to do synchronous initialization of Go code built with
		-buildmode=c-archive or -buildmode=c-shared.
		None of the Go runtime is initialized.
	
		lockextra locks the extra list and returns the list head.
		The caller must unlock the list by storing a new list head
		to extram. If nilokay is true, then lockextra will
		return a nil list head if that's what it finds. If nilokay is false,
		lockextra will keep waiting until the list head is no longer nil.
	
		lockRankMayQueueFinalizer records the lock ranking effects of a
		function that may call queuefinalizer.
	
		lockRankMayTraceFlush records the lock ranking effects of a
		potential call to traceFlush.
		
		nosplit because traceAcquire is nosplit.
	
		lockVerifyMSize confirms that we can recreate the low bits of the M pointer.
	 func lockWithRank(l *mutex, rank lockRank)	
		This function may be called in nosplit context and thus must be nosplit.
	 func lowerASCII(c byte) byte	
		return value is only set on linux to be used in osinit().
	
		The main goroutine.
	
		makeAddrRange creates a new address range from two virtual addresses.
		
		Throws if the base and limit are not in the same memory segment.
	 func makechan64(t *chantype, size int64) *hchan	
		makeHeadTailIndex creates a headTailIndex value from a separate
		head and tail.
	 func makeheapobjbv(p uintptr, size uintptr) bitvector	
		makeLimiterEventStamp creates a new stamp from the event type and the current timestamp.
	
		makemap implements Go map creation for make(map[k]v, hint).
		If the compiler has determined that the map or the first group
		can be created on the stack, m and optionally m.dirPtr may be non-nil.
		If m != nil, the map can be created directly in m.
		If m.dirPtr != nil, it points to a group usable for a small map.
		
		makemap should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/ugorji/go/codec
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		makemap_small implements Go map creation for make(map[k]v) and
		make(map[k]v, hint) when hint is known to be at most abi.SwissMapGroupSlots
		at compile time and the map needs to be allocated on the heap.
		
		makemap_small should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/sonic
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		makeProfStack returns a buffer large enough to hold a maximum-sized stack
		trace.
	
		makeProfStackFP creates a buffer large enough to hold a maximum-sized stack
		trace as well as any additional frames needed for frame pointer unwinding
		with delayed inline expansion.
	
		makeslice should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/sonic
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		makeslicecopy allocates a slice of "tolen" elements of type "et",
		then copies "fromlen" elements of type "et" into that new allocation from "from".
	 func makeSpanClass(sizeclass uint8, noscan bool) spanClass	
		makeStatDepSet creates a new statDepSet from a list of statDeps.
	
		makeTraceFrame sets up a traceFrame for a frame.
	
		makeTraceFrames returns the frames corresponding to pcs. It may
		allocate and may emit trace events.
	
		Allocate a new g, with a stack big enough for stacksize bytes.
	
		Allocate an object of size bytes.
		Small objects are allocated from the per-P cache's free lists.
		Large objects (> 32 kB) are allocated straight from the heap.
		
		mallocgc should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/gopkg
		  - github.com/bytedance/sonic
		  - github.com/cloudwego/frugal
		  - github.com/cockroachdb/cockroach
		  - github.com/cockroachdb/pebble
		  - github.com/ugorji/go/codec
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		mapaccess1 returns a pointer to h[key]. It never returns nil; instead,
		if the key is not in the map, it returns a reference to the zero object
		for the elem type.
		NOTE: The returned pointer may keep the whole map live, so don't
		hold onto it for very long.
		
		mapaccess1 is pushed from internal/runtime/maps. We could just call it, but
		we want to avoid one layer of call.
	 func mapaccess1_fast32(t *abi.SwissMapType, m *maps.Map, key uint32) unsafe.Pointer	 func mapaccess1_fast64(t *abi.SwissMapType, m *maps.Map, key uint64) unsafe.Pointer	 func mapaccess1_faststr(t *abi.SwissMapType, m *maps.Map, ky string) unsafe.Pointer	 func mapaccess1_fat(t *abi.SwissMapType, m *maps.Map, key, zero unsafe.Pointer) unsafe.Pointer	
		mapaccess2 should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/ugorji/go/codec
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		mapaccess2_fast32 should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/ugorji/go/codec
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		mapaccess2_fast64 should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/ugorji/go/codec
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		mapaccess2_faststr should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/ugorji/go/codec
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	 func mapaccess2_fat(t *abi.SwissMapType, m *maps.Map, key, zero unsafe.Pointer) (unsafe.Pointer, bool)	
		mapassign is pushed from internal/runtime/maps. We could just call it, but
		we want to avoid one layer of call.
		
		mapassign should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/sonic
		  - github.com/RomiChan/protobuf
		  - github.com/segmentio/encoding
		  - github.com/ugorji/go/codec
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		mapassign_fast32 should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/sonic
		  - github.com/ugorji/go/codec
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		mapassign_fast32ptr should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/ugorji/go/codec
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		mapassign_fast64 should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/sonic
		  - github.com/ugorji/go/codec
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		mapassign_fast64ptr should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/sonic
		  - github.com/ugorji/go/codec
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		mapassign_faststr should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/sonic
		  - github.com/ugorji/go/codec
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		mapclear deletes all keys from a map.
	
		mapclone for implementing maps.Clone
	
		mapdelete should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/ugorji/go/codec
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	 func mapdelete_fast32(t *abi.SwissMapType, m *maps.Map, key uint32)	 func mapdelete_fast64(t *abi.SwissMapType, m *maps.Map, key uint64)	 func mapdelete_faststr(t *abi.SwissMapType, m *maps.Map, ky string)	
		mapinitnoop is a no-op function known to the Go linker; if a given global
		map (of the right size) is determined to be dead, the linker will
		rewrite the relocation (from the package init func) from the outlined
		map init function to this symbol. Defined in assembly so as to avoid
		complications with instrumentation (coverage, etc).
	
		mapiterinit is a compatibility wrapper for map iterator for users of
		//go:linkname from before Go 1.24. It is not used by Go itself. New users
		should use reflect or the maps package.
		
		mapiterinit should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/sonic
		  - github.com/goccy/go-json
		  - github.com/RomiChan/protobuf
		  - github.com/segmentio/encoding
		  - github.com/ugorji/go/codec
		  - github.com/wI2L/jettison
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		mapiternext is a compatibility wrapper for map iterator for users of
		//go:linkname from before Go 1.24. It is not used by Go itself. New users
		should use reflect or the maps package.
		
		mapiternext should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/sonic
		  - github.com/RomiChan/protobuf
		  - github.com/segmentio/encoding
		  - github.com/ugorji/go/codec
		  - gonum.org/v1/gonum
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		mapIterNext performs the next step of iteration. Afterwards, the next
		key/elem are in it.Key()/it.Elem().
	
		mapIterStart initializes the Iter struct used for ranging over maps and
		performs the first step of iteration. The Iter struct pointed to by 'it' is
		allocated on the stack by the compiler's order pass or on the heap by
		reflect. Both need to have zeroed it since the struct contains pointers.
	 func maps_fatal(s string)	 func maps_newobject(typ *_type) unsafe.Pointer	 func maps_typedmemclr(typ *_type, ptr unsafe.Pointer)	 func maps_typedmemmove(typ *_type, dst, src unsafe.Pointer)	
		markBitsForSpan returns the markBits for the span base address base.
	
		markroot scans the i'th root.
		
		Preemption must be disabled (because this uses a gcWork).
		
		Returns the amount of GC work credit produced by the operation.
		If flushBgCredit is true, then that credit is also flushed
		to the background credit pool.
		
		nowritebarrier is only advisory here.
	
		markrootBlock scans the shard'th shard of the block of memory [b0,
		b0+n0), with the given pointer mask.
		
		Returns the amount of work done.
	
		markrootFreeGStacks frees stacks of dead Gs.
		
		This does not free stacks of dead Gs cached on Ps, but having a few
		cached stacks around isn't a problem.
	
		markrootSpans marks roots for one shard of markArenas.
	
		maxSearchAddr returns the maximum searchAddr value, which indicates
		that the heap has no free space.
		
		This function exists just to make it clear that this is the maximum address
		for the page allocator's search space. See maxOffAddr for details.
		
		It's a function (rather than a variable) because it needs to be
		usable before package runtime's dynamic initialization is complete.
		See #51913 for details.
	
		mayMoreStackMove is a maymorestack hook that forces stack movement
		at every possible point.
		
		See mayMoreStackPreempt.
	
		mayMoreStackPreempt is a maymorestack hook that forces a preemption
		at every possible cooperative preemption point.
		
		This is valuable to apply to the runtime, which can be sensitive to
		preemption points. To apply this to all preemption points in the
		runtime and runtime-like code, use the following in bash or zsh:
		
			X=(-{gc,asm}flags={runtime/...,reflect,sync}=-d=maymorestack=runtime.mayMoreStackPreempt) GOFLAGS=${X[@]}
		
		This must be deeply nosplit because it is called from a function
		prologue before the stack is set up and because the compiler will
		call it from any splittable prologue (leading to infinite
		recursion).
		
		Ideally it should also use very little stack because the linker
		doesn't currently account for this in nosplit stack depth checking.
		
		Ensure mayMoreStackPreempt can be called for all ABIs.
	
		mcall switches from the g to the g0 stack and invokes fn(g),
		where g is the goroutine that made the call.
		mcall saves g's current PC/SP in g->sched so that it can be restored later.
		It is up to fn to arrange for that later execution, typically by recording
		g in a data structure, causing something to call ready(g) later.
		mcall returns to the original goroutine g later, when g has been rescheduled.
		fn must not return at all; typically it ends by calling schedule, to let the m
		run other goroutines.
		
		mcall can only be called from g stacks (not g0, not gsignal).
		
		This must NOT be go:noescape: if fn is a stack-allocated closure,
		fn puts g on a run queue, and g executes before fn returns, the
		closure will be invalidated while it is still executing.
	
		Pre-allocated ID may be passed as 'id', or omitted by passing -1.
	
		Called from mexit, but not from dropm, to undo the effect of thread-owned
		resources in minit, semacreate, or elsewhere. Do not take locks after calling this.
		
		This always runs without a P, so //go:nowritebarrierrec is required.
	
		memclrHasPointers clears n bytes of typed memory starting at ptr.
		The caller must ensure that the type of the object at ptr has
		pointers, usually by checking typ.PtrBytes. However, ptr
		does not have to point to the start of the allocation.
		
		memclrHasPointers should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/sonic
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		memclrNoHeapPointers clears n bytes starting at ptr.
		
		Usually you should use typedmemclr. memclrNoHeapPointers should be
		used only when the caller knows that *ptr contains no heap pointers
		because either:
		
		*ptr is initialized memory and its type is pointer-free, or
		
		*ptr is uninitialized memory (e.g., memory that's being reused
		for a new allocation) and hence contains only "junk".
		
		memclrNoHeapPointers ensures that if ptr is pointer-aligned, and n
		is a multiple of the pointer size, then any pointer-aligned,
		pointer-sized portion is cleared atomically. Despite the function
		name, this is necessary because this function is the underlying
		implementation of typedmemclr and memclrHasPointers. See the doc of
		memmove for more details.
		
		The (CPU-specific) implementations of this function are in memclr_*.s.
		
		memclrNoHeapPointers should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/sonic
		  - github.com/chenzhuoyu/iasm
		  - github.com/dgraph-io/ristretto
		  - github.com/outcaste-io/ristretto
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		memclrNoHeapPointersChunked repeatedly calls memclrNoHeapPointers
		on chunks of the buffer to be zeroed, with opportunities for preemption
		along the way.  memclrNoHeapPointers contains no safepoints and also
		cannot be preemptively scheduled, so this provides a still-efficient
		block copy that can also be preempted on a reasonable granularity.
		
		Use this with care; if the data being cleared is tagged to contain
		pointers, this allows the GC to run before it is all cleared.
	
		in internal/bytealg/equal_*.s
		
		memequal should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/sonic
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	 func memequal128(p, q unsafe.Pointer) bool	 func memequal16(p, q unsafe.Pointer) bool	 func memequal32(p, q unsafe.Pointer) bool	 func memequal64(p, q unsafe.Pointer) bool	 func memequal_varlen(a, b unsafe.Pointer) bool	
		memhash should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/aacfactory/fns
		  - github.com/dgraph-io/ristretto
		  - github.com/minio/simdjson-go
		  - github.com/nbd-wtf/go-nostr
		  - github.com/outcaste-io/ristretto
		  - github.com/puzpuzpuz/xsync/v2
		  - github.com/puzpuzpuz/xsync/v3
		  - github.com/authzed/spicedb
		  - github.com/pingcap/badger
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		memmove copies n bytes from "from" to "to".
		
		memmove ensures that any pointer in "from" is written to "to" with
		an indivisible write, so that racy reads cannot observe a
		half-written pointer. This is necessary to prevent the garbage
		collector from observing invalid pointers, and differs from memmove
		in unmanaged languages. However, memmove is only required to do
		this if "from" and "to" may contain pointers, which can only be the
		case if "from", "to", and "n" are all be word-aligned.
		
		Implementations are in memmove_*.s.
		
		Outside assembly calls memmove.
		
		memmove should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/sonic
		  - github.com/cloudwego/dynamicgo
		  - github.com/ebitengine/purego
		  - github.com/tetratelabs/wazero
		  - github.com/ugorji/go/codec
		  - gvisor.dev/gvisor
		  - github.com/sagernet/gvisor
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		memProfileInternal returns the number of records n in the profile. If there
		are fewer than size records, copyFn is invoked for each record, and ok returns
		true.
		
		The linker sets disableMemoryProfiling to true to disable memory profiling
		if this function is not reachable. Mark it noinline to ensure the symbol exists.
		(This function is big and normally not inlined anyway.)
		See also disableMemoryProfiling above and cmd/link/internal/ld/lib.go:linksetup.
	
		mergeSummaries merges consecutive summaries, each of which may represent
		at most 1 << logMaxPagesPerSum pages, into one.
	
		mexit tears down and exits the current thread.
		
		Don't call this directly to exit the thread, since it must run at
		the top of the thread stack. Instead, use gogo(&gp.m.g0.sched) to
		unwind the stack to the point that exits the thread.
		
		It is entered with m.p != nil, so write barriers are allowed. It
		will release the P before exiting.
	
		Try to get an m from midle list.
		sched.lock must be held.
		May run during STW, so write barriers are not allowed.
	
		Called to initialize a new m (including the bootstrap m).
		Called on the new thread, cannot allocate memory.
	
		minitSignalMask is called when initializing a new m to set the
		thread's signal mask. When this is called all signals have been
		blocked for the thread.  This starts with m.sigmask, which was set
		either from initSigmask for a newly created thread or by calling
		sigsave if this is a non-Go thread calling a Go function. It
		removes all essential signals from the mask, thus causing those
		signals to not be blocked. Then it sets the thread's signal mask.
		After this is called the thread can receive signals.
	
		minitSignals is called when initializing a new m to set the
		thread's alternate signal stack and signal mask.
	
		minitSignalStack is called when initializing a new m to set the
		alternate signal stack. If the alternate signal stack is not set
		for the thread (the normal case) then set the alternate signal
		stack to the gsignal stack. If the alternate signal stack is set
		for the thread (the case when a non-Go thread sets the alternate
		signal stack and then calls a Go function) then set the gsignal
		stack to the alternate signal stack. We also set the alternate
		signal stack to the gsignal stack if cgo is not used (regardless
		of whether it is already set). Record which choice was made in
		newSigstack, so that it can be undone in unminit.
	
		mmap is used to route the mmap system call through C code when using cgo, to
		support sanitizer interceptors. Don't allow stack splits, since this function
		(used by sysAlloc) is called in a lot of low-level parts of the runtime and
		callers often assume it won't acquire any locks.
	
		moduledataverify1 should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/sonic
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
		See go.dev/issue/71672.
	
		modulesinit creates the active modules slice out of all loaded modules.
		
		When a module is first loaded by the dynamic linker, an .init_array
		function (written by cmd/link) is invoked to call addmoduledata,
		appending the module to the linked list that starts with
		firstmoduledata.
		
		There are two times this can happen in the lifecycle of a Go
		program. First, if compiled with -linkshared, a number of modules
		built with -buildmode=shared can be loaded at program initialization.
		Second, a Go program can load a module while running that was built
		with -buildmode=plugin.
		
		After loading, this function is called which initializes the
		moduledata so it is usable by the GC and creates a new activeModules
		list.
		
		Only one goroutine may call modulesinit at a time.
	
		morestack_noctxt should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/sonic
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
		See go.dev/issue/71672.
	
		This is exported as ABI0 via linkname so obj can call it.
	
		mPark causes a thread to park itself, returning once woken.
	
		Called to initialize a new m (including the bootstrap m).
		Called on the parent thread (main thread in case of bootstrap), can allocate memory.
	
		mProf_Flush flushes the events from the current heap profiling
		cycle into the active profile. After this it is safe to start a new
		heap profiling cycle with mProf_NextCycle.
		
		This is called by GC after mark termination starts the world. In
		contrast with mProf_NextCycle, this is somewhat expensive, but safe
		to do concurrently.
	
		mProf_FlushLocked flushes the events from the heap profiling cycle at index
		into the active profile. The caller must hold the lock for the active profile
		(profMemActiveLock) and for the profiling cycle at index
		(profMemFutureLock[index]).
	
		Called when freeing a profiled block.
	
		Called by malloc to record a profiled block.
	
		mProf_NextCycle publishes the next heap profile cycle and creates a
		fresh heap profile cycle. This operation is fast and can be done
		during STW. The caller must call mProf_Flush before calling
		mProf_NextCycle again.
		
		This is called by mark termination during STW so allocations and
		frees after the world is started again count towards a new heap
		profiling cycle.
	
		mProf_PostSweep records that all sweep frees for this GC cycle have
		completed. This has the effect of publishing the heap profile
		snapshot as of the last mark termination without advancing the heap
		profile cycle.
	
		mProfStackInit is used to eagerly initialize stack trace buffers for
		profiling. Lazy allocation would have to deal with reentrancy issues in
		malloc and runtime locks for mLockProfile.
		TODO(mknyszek): Implement lazy allocation if this becomes a problem.
	
		Put mp on midle list.
		sched.lock must be held.
		May run during STW, so write barriers are not allowed.
	
		mrandinit initializes the random state of an m.
	
		mReserveID returns the next ID to use for a new m. This new m is immediately
		considered 'running' by checkdead.
		
		sched.lock must be held.
	 func msanmalloc(addr unsafe.Pointer, sz uintptr)	
		msigrestore sets the current thread's signal mask to sigmask.
		This is used to restore the non-Go signal mask when a non-Go thread
		calls a Go function.
		This is nosplit and nowritebarrierrec because it is called by dropm
		after g has been cleared.
	
		mStackIsSystemAllocated indicates whether this runtime starts on a
		system-allocated stack.
	
		mstart is the entry-point for new Ms.
		It is written in assembly, uses ABI0, is marked TOPFRAME, and calls mstart0.
	
		mstart0 is the Go entry-point for new Ms.
		This must not split the stack because we may not even have stack
		bounds set up yet.
		
		May run during STW (because it doesn't have a P yet), so write
		barriers are not allowed.
	
		The go:noinline is to guarantee the sys.GetCallerPC/sys.GetCallerSP below are safe,
		so that we can set up g0.sched to return to the call of mstart1 above.
	
		mstartm0 implements part of mstart1 that only runs on the m0.
		
		Write barriers are allowed here because we know the GC can't be
		running yet, so they'll be no-ops.
	
		64x64 -> 128 multiply.
		Adapted from Hacker's Delight.
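		
		A portable sketch in the Hacker's Delight style: split each operand into 32-bit
		halves and recombine the partial products (outside the runtime, math/bits.Mul64
		computes the same thing):
		
			func mul64(a, b uint64) (hi, lo uint64) {
				const mask32 = 1<<32 - 1
				x0, x1 := a&mask32, a>>32
				y0, y1 := b&mask32, b>>32
				w0 := x0 * y0
				t := x1*y0 + w0>>32
				w1 := t&mask32 + x0*y1
				hi = x1*y1 + t>>32 + w1>>32
				lo = a * b // low 64 bits wrap naturally
				return
			}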
	 func mutexContended(l *mutex) bool	 func mutexevent(cycles int64, skip int)	
		mutexPreferLowLatency reports if this mutex prefers low latency at the risk
		of performance collapse. If so, we can allow all waiting threads to spin on
		the state word rather than go to sleep.
		
		TODO: We could have the waiting Ms each spin on their own private cache line,
		especially if we can put a bound on the on-CPU time that would consume.
		
		TODO: If there's a small set of mutex values with special requirements, they
		could make use of a more specialized lock2/unlock2 implementation. Otherwise,
		we're constrained to what we can fit within a single uintptr with no
		additional storage on the M for each lock held.
	
		mutexProfileInternal returns the number of records n in the profile. If there
		are fewer than size records, copyFn is invoked for each record, and ok returns
		true.
	
		mutexWaitListHead recovers a full muintptr that was missing its low bits.
		With the exception of the static m0 value, it requires allocating runtime.m
		values in a size class with a particular minimum alignment. The 2048-byte
		size class allows recovering the full muintptr value even after overwriting
		the low 11 bits with flags. We can use those 11 bits as 3 flags and an
		atomically-swapped byte.
	
		Exported via linkname for use by time and internal/poll.
		
		Many external packages also linkname nanotime for a fast monotonic time.
		Such code should be updated to use:
		
			var start = time.Now() // at init time
		
		and then replace nanotime() with time.Since(start), which is equally fast.
		
		However, all the code linknaming nanotime is never going to go away.
		Do not remove or change the type signature.
		See go.dev/issue/67401.
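		
		Spelled out, the supported replacement looks like this (package and function
		names here are illustrative):
		
			package clock
			
			import "time"
			
			var start = time.Now() // captured once, at init time
			
			// Nanos replaces a linknamed nanotime(): time.Since reads the
			// monotonic clock stored in start, so the result stays monotonic.
			func Nanos() int64 { return int64(time.Since(start)) }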
	
		Acquire an extra m and bind it to the C thread when a pthread key has been created.
	
		needm is called when a cgo callback happens on a
		thread without an m (a thread not created by Go).
		In this case, needm is expected to find an m to use
		and return with m, g initialized correctly.
		Since m and g are not set now (likely nil, but see below)
		needm is limited in what routines it can call. In particular
		it can only call nosplit functions (textflag 7) and cannot
		do any scheduling that requires an m.
		
		In order to avoid needing heavy lifting here, we adopt
		the following strategy: there is a stack of available m's
		that can be stolen. Using compare-and-swap
		to pop from the stack has ABA races, so we simulate
		a lock by doing an exchange (via Casuintptr) to steal the stack
		head and replace the top pointer with MLOCKED (1).
		This serves as a simple spin lock that we can use even
		without an m. The thread that locks the stack in this way
		unlocks the stack by storing a valid stack head pointer.
		
		In order to make sure that there is always an m structure
		available to be stolen, we maintain the invariant that there
		is always one more than needed. At the beginning of the
		program (if cgo is in use) the list is seeded with a single m.
		If needm finds that it has taken the last m off the list, its job
		is - once it has installed its own m so that it can do things like
		allocate memory - to create a spare m and put it on the list.
		
		Each of these extra m's also has a g0 and a curg that are
		pressed into service as the scheduling stack and current
		goroutine for the duration of the cgo callback.
		
		It calls dropm to put the m back on the list,
		1. when the callback is done with the m on non-pthread platforms,
		2. or when the C thread is exiting on pthread platforms.
		
		The signal argument indicates whether we're called from a signal
		handler.
	
		netpoll checks for ready network connections.
		Returns a list of goroutines that become runnable,
		and a delta to add to netpollWaiters.
		This must never return an empty list with a non-zero delta.
		
		delay < 0: blocks indefinitely
		delay == 0: does not block, just polls
		delay > 0: block for up to that many nanoseconds
	
		netpollAdjustWaiters adds delta to netpollWaiters.
	
		netpollAnyWaiters reports whether any goroutines are waiting for I/O.
	 func netpollarm(pd *pollDesc, mode int)	
		returns true if IO is ready, or false if it timed out or was closed.
		waitio - wait only for completed IO, ignore errors
		Concurrent calls to netpollblock in the same mode are forbidden, as pollDesc
		can hold only a single waiting goroutine for each mode.
	
		netpollBreak interrupts an epollwait.
	 func netpollcheckerr(pd *pollDesc, mode int32) int	 func netpollDeadline(arg any, seq uintptr, delta int64)	 func netpolldeadlineimpl(pd *pollDesc, seq uintptr, read, write bool)	 func netpollgoready(gp *g, traceskip int)	 func netpollopen(fd uintptr, pd *pollDesc) uintptr	 func netpollReadDeadline(arg any, seq uintptr, delta int64)	
		netpollready is called by the platform-specific netpoll function.
		It declares that the fd associated with pd is ready for I/O.
		The toRun argument is used to build a list of goroutines to return
		from netpoll. The mode argument is 'r', 'w', or 'r'+'w' to indicate
		whether the fd is ready for reading or writing or both.
		
		This returns a delta to apply to netpollWaiters.
		
		This may run while the world is stopped, so write barriers are not allowed.
	
		netpollunblock moves either pd.rg (if mode == 'r') or
		pd.wg (if mode == 'w') into the pdReady state.
		This returns any goroutine blocked on pd.{rg,wg}.
		It adds any adjustment to netpollWaiters to *delta;
		this adjustment should be applied after the goroutine has
		been marked ready.
	 func netpollWriteDeadline(arg any, seq uintptr, delta int64)	
		newAllocBits returns a pointer to 8 byte aligned bytes
		to be used for this span's alloc bits.
		newAllocBits is used to provide newly initialized spans
		allocation bits. For spans not being initialized the
		mark bits are repurposed as allocation bits when
		the span is swept.
	
		newArenaMayUnlock allocates and zeroes a gcBits arena.
		The caller must hold gcBitsArena.lock. This may temporarily release it.
	
		newarray allocates an array of n elements of type typ.
		
		newarray should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/RomiChan/protobuf
		  - github.com/segmentio/encoding
		  - github.com/ugorji/go/codec
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		newBucket allocates a bucket with the given type and number of stack entries.
	
		newcoro creates a new coro containing a
		goroutine blocked waiting to run f
		and returns that coro.
	
		Allocate a Defer, usually using per-P pool.
		Each defer must be released with freedefer.  The defer is not
		added to any defer chain yet.
	
		newextram allocates m's and puts them on the extra list.
		It is called with a working local m, so that it can do things
		like call schedlock and allocate.
	
		newInlineUnwinder creates an inlineUnwinder initially set to the inner-most
		inlined frame at PC. PC should be a "call PC" (not a "return PC").
		
		This unwinder uses non-strict handling of PC because it's assumed this is
		only ever used for symbolic debugging. If things go really wrong, it'll just
		fall back to the outermost frame.
		
		newInlineUnwinder should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/phuslu/log
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		Create a new m. It will start off with a call to fn, or else the scheduler.
		fn needs to be static and not a heap allocated closure.
		May run with m.p==nil, so write barriers are not allowed.
		
		id is optional pre-allocated m ID. Omit by passing -1.
	
		newMarkBits returns a pointer to 8 byte aligned bytes
		to be used for a span's mark bits.
	
		implementation of new builtin
		compiler (both frontend and SSA backend) knows the signature
		of this function.
	
		May run with m.p==nil, so write barriers are not allowed.
	
		Version of newosproc that doesn't require a valid G.
	
		Create a new g running fn.
		Put it on the queue of g's waiting to run.
		The compiler turns a go statement into a call to this.
	
		Create a new g in state _Grunnable (or _Gwaiting if parked is true), starting at fn.
		callerpc is the address of the go statement that created this. The caller is responsible
		for adding the new g to the scheduler. If parked is true, waitreason must be non-zero.
	
		newProfBuf returns a new profiling buffer with room for
		a header of hdrsize words and a buffer of at least bufwords words.
	 func newSpecialsIter(span *mspan) specialsIter	
		Called from runtime·morestack when more stack is needed.
		Allocate larger stack and relocate to new stack.
		Stack growth is multiplicative, for constant amortized cost.
		
		g->atomicstatus will be Grunning or Gscanrunning upon entry.
		If the scheduler is trying to stop this g, then it will set preemptStop.
		
		This must be nowritebarrierrec because it can be called as part of
		stack growth from other nowritebarrierrec functions, but the
		compiler doesn't check this.
	
		newTimer allocates and returns a new time.Timer or time.Ticker (same layout)
		with the given parameters.
	
		newUserArena creates a new userArena ready to be used.
	
		newUserArenaChunk allocates a user arena chunk, which maps to a single
		heap arena and single span. Returns a pointer to the base of the chunk
		(this is really important: we need to keep the chunk alive) and the span.
	
		newWakeableSleep initializes a new wakeableSleep and returns it.
	
		nextFreeFast returns the next free object if one is quickly available.
		Otherwise it returns 0.
	
		nextMarkBitArenaEpoch establishes a new epoch for the arenas
		holding the mark bits. The arenas are named relative to the
		current GC cycle which is demarcated by the call to finishweep_m.
		
		All current spans have been swept.
		During that sweep each span allocated room for its gcmarkBits in
		gcBitsArenas.next block. gcBitsArenas.next becomes the gcBitsArenas.current
		where the GC will mark objects and after each span is swept these bits
		will be used to allocate objects.
		gcBitsArenas.current becomes gcBitsArenas.previous where the span's
		gcAllocBits live until all the spans have been swept during this GC cycle.
		The span's sweep extinguishes all the references to gcBitsArenas.previous
		by pointing gcAllocBits into the gcBitsArenas.current.
		The gcBitsArenas.previous is released to the gcBitsArenas.free list.
	
		nextSample returns the next sampling point for heap profiling. The goal is
		to sample allocations on average every MemProfileRate bytes, but with a
		completely random distribution over the allocation timeline; this
		corresponds to a Poisson process with parameter MemProfileRate. In Poisson
		processes, the distance between two samples follows the exponential
		distribution (exp(MemProfileRate)), so the best return value is a random
		number taken from an exponential distribution whose mean is MemProfileRate.
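		
		A sketch of that rule using the standard library (the runtime uses its own,
		cheaper generator internally); names here are illustrative:
		
			package sample
			
			import "math/rand"
			
			// next returns the distance in bytes to the next heap-profile sample,
			// drawn from an exponential distribution with mean rate.
			func next(rate int64, rng *rand.Rand) int64 {
				return int64(rng.ExpFloat64() * float64(rate))
			}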
	
		nextSampleNoFP is similar to nextSample, but uses older,
		simpler code to avoid floating point.
	
		nextslicecap computes the next appropriate slice length.
	 func nilinterequal(p, q unsafe.Pointer) bool	
		nilinterhash should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/anacrolix/stm
		  - github.com/aristanetworks/goarista
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		noescape hides a pointer from escape analysis.  noescape is
		the identity function but escape analysis doesn't think the
		output depends on the input.  noescape is inlined and currently
		compiles down to zero instructions.
		USE CAREFULLY!
		
		noescape should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/gopkg
		  - github.com/ebitengine/purego
		  - github.com/hamba/avro/v2
		  - github.com/puzpuzpuz/xsync/v3
		  - github.com/songzhibin97/gkit
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
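		
		The widely circulated shape of this helper, shown as a sketch that assumes an
		import of package unsafe (consult the runtime source for the authoritative version):
		
			//go:nosplit
			func noescape(p unsafe.Pointer) unsafe.Pointer {
				x := uintptr(p)
				return unsafe.Pointer(x ^ 0) // the ^0 hides the data flow from escape analysis
			}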
	
		Type Parameters:
			T: any
		noEscapePtr hides a pointer from escape analysis. See noescape.
		USE CAREFULLY!
	 func nonblockingPipe() (r, w int32, errno int32)	
		This is called when we receive a signal when there is no signal stack.
		This can only happen if non-Go code calls sigaltstack to disable the
		signal stack.
	
		One-time notifications.
	 func notetsleep(n *note, ns int64) bool	
		May run with m.p==nil if called from notetsleep, so write barriers
		are not allowed.
	
		same as runtime·notetsleep, but called on user g (not g0)
		calls only nosplit functions between entersyscallblock/exitsyscall.
	 func notewakeup(n *note)	
		notifyListAdd adds the caller to a notify list such that it can receive
		notifications. The caller must eventually call notifyListWait to wait for
		such a notification, passing the returned ticket number.
	
		notifyListNotifyAll notifies all entries in the list.
	
		notifyListNotifyOne notifies one entry in the list.
	
		notifyListWait waits for a notification. If one has been sent since
		notifyListAdd was called, it returns immediately. Otherwise, it blocks.
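		
		In the current implementation these notify-list primitives back sync.Cond; at the
		user level the add/wait/notify sequence looks like:
		
			package main
			
			import "sync"
			
			func main() {
				var mu sync.Mutex
				cond := sync.NewCond(&mu)
				ready := false
			
				go func() {
					mu.Lock()
					ready = true
					mu.Unlock()
					cond.Signal() // notifyListNotifyOne
				}()
			
				mu.Lock()
				for !ready {
					cond.Wait() // notifyListAdd, then notifyListWait
				}
				mu.Unlock()
			}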
	
		nsToSec takes a duration in nanoseconds and converts it to seconds as
		a float64.
	
		offAddrToLevelIndex converts an address in the offset address space
		to the index into summary[level] containing addr.
	
		oneNewExtraM allocates an m and puts it on the extra list.
	
		os_beforeExit is called from os.Exit(0).
	 func osPreemptExtEnter(mp *m)	 func osPreemptExtExit(mp *m)	
		osRelax is called by the scheduler when transitioning to and from
		all Ps being idle.
	 func osSetupTLS(mp *m)	
		osStackAlloc performs OS-specific initialization before s is used
		as stack memory.
	
		osStackFree undoes the effect of osStackAlloc before s is returned
		to the heap.
	
		packPallocSum takes a start, max, and end value and produces a pallocSum.
	
		pageIndexOf returns the arena, page index, and page mask for pointer p.
		The caller must ensure p is in the heap.
	
		Check to make sure we can really generate a panic. If the panic
		was generated from the runtime, or from inside malloc, then convert
		to a throw of msg.
		pc should be the program counter of the compiler-generated code that
		triggered this panic.
	
		Same as above, but calling from the runtime is allowed.
		
		Using this function is necessary for any panic that may be
		generated by runtime.sigpanic, since those are always called by the
		runtime.
	
		panicdottypeE is called when doing an e.(T) conversion and the conversion fails.
		have = the dynamic type we have.
		want = the static type we're trying to convert to.
		iface = the static type we're converting from.
	
		panicdottypeI is called when doing an i.(T) conversion and the conversion fails.
		Same args as panicdottypeE, but "have" is the dynamic itab we have.
	
		Implemented in assembly, as they take arguments in registers.
		Declared here to mark them as ABIInternal.
	 func panicIndexU(x uint, y int)	 func panicmemAddr(addr uintptr)	
		panicnildottype is called when doing an i.(T) conversion and the interface i is nil.
		want = the static type we're trying to convert to.
	 func panicrangestate(state int)	 func panicSlice3Acap(x int, y int)	 func panicSlice3AcapU(x uint, y int)	 func panicSlice3Alen(x int, y int)	 func panicSlice3AlenU(x uint, y int)	 func panicSlice3B(x int, y int)	 func panicSlice3BU(x uint, y int)	 func panicSlice3C(x int, y int)	 func panicSlice3CU(x uint, y int)	 func panicSliceAcap(x int, y int)	 func panicSliceAcapU(x uint, y int)	 func panicSliceAlen(x int, y int)	 func panicSliceAlenU(x uint, y int)	 func panicSliceB(x int, y int)	 func panicSliceBU(x uint, y int)	 func panicSliceConvert(x int, y int)	
		panicwrap generates a panic for a call to a wrapped value method
		with a nil pointer receiver.
		
		It is called from the generated wrapper code.
	
		park continuation on g0.
	
		parseByteCount parses a string that represents a count of bytes.
		
		s must match the following regular expression:
		
			^[0-9]+(([KMGT]i)?B)?$
		
		In other words, an integer byte count with an optional unit
		suffix. Acceptable suffixes include one of
		- KiB, MiB, GiB, TiB which represent binary IEC/ISO 80000 units, or
		- B, which just represents bytes.
		
		Returns an int64 because that's what its callers want and receive,
		but the result is always non-negative.
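		
		A minimal sketch of that grammar (not the runtime's parser; overflow checking
		omitted for brevity):
		
			package bytecount
			
			import "strconv"
			
			var units = map[string]int64{
				"": 1, "B": 1, "KiB": 1 << 10, "MiB": 1 << 20, "GiB": 1 << 30, "TiB": 1 << 40,
			}
			
			func parse(s string) (int64, bool) {
				i := 0
				for i < len(s) && s[i] >= '0' && s[i] <= '9' {
					i++
				}
				n, err := strconv.ParseInt(s[:i], 10, 64)
				mult, ok := units[s[i:]]
				if i == 0 || err != nil || !ok {
					return 0, false
				}
				return n * mult, true
			}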
	
		parsegodebug parses the godebug string, updating variables listed in dbgvars.
		If seen == nil, this is startup time and we process the string left to right
		overwriting older settings with newer ones.
		If seen != nil, $GODEBUG has changed and we are doing an
		incremental update. To avoid flapping in the case where a value is
		set multiple times (perhaps in the default and the environment,
		or perhaps twice in the environment), we process the string right-to-left
		and only change values not already seen. After doing this for both
		the environment and the default settings, the caller must also call
		cleargodebug(seen) to reset any now-unset values back to their defaults.
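		
		A sketch of the incremental (seen != nil) case; the helper names here are
		hypothetical, not the runtime's. Scanning right-to-left and skipping names
		already seen means the rightmost setting wins and each name is applied exactly once.
		
			package godebug
			
			import "strings"
			
			func applyIncremental(godebug string, seen map[string]bool, set func(name, val string)) {
				pairs := strings.Split(godebug, ",")
				for i := len(pairs) - 1; i >= 0; i-- {
					name, val, ok := strings.Cut(pairs[i], "=")
					if !ok || seen[name] {
						continue
					}
					seen[name] = true
					set(name, val)
				}
			}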
	
		pause is only used on wasm.
	 func pcdatastart(f funcInfo, table uint32) uint32	
		Like pcdatavalue, but also return the start PC of this PCData value.
	
		Returns the PCData value, and the PC where this value starts.
	
		pcvalueCacheKey returns the outermost index in a pcvalueCache to use for targetpc.
		It must be very cheap to calculate.
		For now, align to goarch.PtrSize and reduce mod the number of entries.
		In practice, this appears to be fairly randomly and evenly distributed.
	
		Wrapper around sysAlloc that can allocate small chunks.
		There is no associated free operation.
		Intended for things like function/type/debug-related persistent data.
		If align is 0, uses default align (currently 8).
		The returned memory will be zeroed.
		sysStat must be non-nil.
		
		Consider marking persistentalloc'd types not in heap by embedding
		internal/runtime/sys.NotInHeap.
		
		nosplit because it is used during write barriers and must not be preempted.
	
		Must run on system stack because stack growth can (re)invoke it.
		See issue 9174.
	
		pidleget tries to get a p from the _Pidle list, acquiring ownership.
		
		sched.lock must be held.
		
		May run during STW, so write barriers are not allowed.
	
		pidlegetSpinning tries to get a p from the _Pidle list, acquiring ownership.
		This is called by spinning Ms (or callers that need a spinning M) that have
		found work. If no P is available, this must be synchronized with non-spinning
		Ms that may be preparing to drop their P without discovering this work.
		
		sched.lock must be held.
		
		May run during STW, so write barriers are not allowed.
	
		pidleput puts p on the _Pidle list. now must be a relatively recent call
		to nanotime or zero. Returns now or the current time if now was zero.
		
		This releases ownership of p. Once sched.lock is released it is no longer
		safe to use p.
		
		sched.lock must be held.
		
		May run during STW, so write barriers are not allowed.
	
		only for tests
	 func pinnerGetPtr(i *any) unsafe.Pointer	 func plugin_lastmoduleinit() (path string, syms map[string]any, initTasks []*initTask, errstr string)	
		Returns GC type info for the pointer stored in ep for testing.
		If ep points to the stack, only static live information will be returned
		(i.e. not for objects which are only dynamically live stack objects).
	
		poll_runtime_isPollServerDescriptor reports whether fd is a
		descriptor being used by netpoll.
	 func poll_runtime_pollOpen(fd uintptr) (*pollDesc, int)	
		poll_runtime_pollReset, which is internal/poll.runtime_pollReset,
		prepares a descriptor for polling in mode, which is 'r' or 'w'.
		This returns an error code; the codes are defined above.
	 func poll_runtime_pollSetDeadline(pd *pollDesc, d int64, mode int)	
		poll_runtime_pollWait, which is internal/poll.runtime_pollWait,
		waits for a descriptor to be ready for reading or writing,
		according to mode, which is 'r' or 'w'.
		This returns an error code; the codes are defined above.
	 func poll_runtime_pollWaitCanceled(pd *pollDesc, mode int)	 func poll_runtime_Semacquire(addr *uint32)	 func poll_runtime_Semrelease(addr *uint32)	
		pollFractionalWorkerExit reports whether a fractional mark worker
		should self-preempt. It assumes it is called from the fractional
		worker.
	
		pollWork reports whether there is non-background work this P could
		be doing. This is a fairly lightweight check to be used for
		background work loops, like idle GC. It checks a subset of the
		conditions checked by the actual scheduler.
	
		popDefer pops the head of gp's defer list and frees it.
	 func pprof_blockProfileInternal(p []profilerecord.BlockProfileRecord) (n int, ok bool)	
		runtime/pprof.runtime_cyclesPerSecond should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/grafana/pyroscope-go/godeltaprof
		  - github.com/pyroscope-io/godeltaprof
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	 func pprof_fpunwindExpand(dst, src []uintptr) int	 func pprof_goroutineProfileWithLabels(p []profilerecord.StackRecord, labels []unsafe.Pointer) (n int, ok bool)	 func pprof_memProfileInternal(p []profilerecord.MemProfileRecord, inuseZero bool) (n int, ok bool)	 func pprof_mutexProfileInternal(p []profilerecord.BlockProfileRecord) (n int, ok bool)	 func pprof_threadCreateInternal(p []profilerecord.StackRecord) (n int, ok bool)	
		Tell all goroutines that they have been preempted and they should stop.
		This function is purely best-effort. It can fail to inform a goroutine if a
		processor just started running it.
		No locks need to be held.
		Returns true if preemption request was issued to at least one goroutine.
	
		preemptM sends a preemption request to mp. This request may be
		handled asynchronously and may be coalesced with other requests to
		the M. When the request is received, if the running G or P are
		marked for preemption and the goroutine is at an asynchronous
		safe-point, it will preempt the goroutine. It always atomically
		increments mp.preemptGen after handling a preemption request.
	
		Tell the goroutine running on processor P to stop.
		This function is purely best-effort. It can incorrectly fail to inform the
		goroutine. It can inform the wrong goroutine. Even if it informs the
		correct goroutine, that goroutine might ignore the request if it is
		simultaneously executing newstack.
		No lock needs to be held.
		Returns true if preemption request was issued.
		The actual preemption will happen at some point in the future
		and will be indicated by the gp->status no longer being
		Grunning
	
		preemptPark parks gp and puts it in _Gpreempted.
	
		prepareFreeWorkbufs moves busy workbuf spans to free list so they
		can be freed to the heap. This must only be called when all
		workbufs are on the empty list.
	
		Call all Error and String methods before freezing the world.
		Used when crashing with panicking.
	
		printAncestorTraceback prints the traceback of the given ancestor.
		TODO: Unify this with gentraceback and CallersFrames.
	
		printAncestorTracebackFuncInfo prints the given function info at a given pc
		within an ancestor traceback. The precision of this info is reduced
		because only the pcs recorded at the time the caller goroutine was
		created are available.
	
		Invariant: each newline in the string representation is followed by a tab.
	
		printArgs prints function arguments in traceback.
	
		printCgoTraceback prints a traceback of callers.
	 func printcreatedby(gp *g)
	 func printcreatedby1(f funcInfo, pc uintptr, goid uint64)
		printDebugLog prints the debug log.
	
		printDebugLogPC prints a single symbolized PC. If returnPC is true,
		pc is a return PC that must first be converted to a call PC.
	 func printeface(e eface)	
		printFuncName prints a function name. name is the function name in
		the binary's func data table.
	 func printiface(i iface)	
		printindented prints s, replacing "\n" with "\n\t".
	
		printOneCgoTraceback prints the traceback of a single cgo caller.
		This can print more than one line because of inlining.
		It returns the "stop" result of commitFrame.
	
		Print all currently active panics. Used when crashing.
		Should only be called after preprintpanics.
	
		printpanicval prints an argument passed to panic.
		If panic is called with a value that has a String or Error method,
		it has already been converted into a string by preprintpanics.
		
		To ensure that the traceback can be unambiguously parsed even when
		the panic value contains "\ngoroutine" and other stack-like
		strings, newlines in the string representation of v are replaced by
		"\n\t".
	
		printScavTrace prints a scavenge trace line to standard error.
		
		released should be the amount of memory released since the last time this
		was called, and forced indicates whether the scavenge was forced by the
		application.
		
		scavenger.lock must be held.
	 func printslice(s []byte)	
		procPin should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/gopkg
		  - github.com/choleraehyq/pid
		  - github.com/songzhibin97/gkit
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		Change number of processors.
		
		sched.lock must be held, and the world must be stopped.
		
		gcworkbufs must not be being modified by either the GC or the write barrier
		code, so the GC must not be running if the number of Ps actually changes.
		
		Returns list of Ps with local work, they need to be scheduled by the caller.
	
		procUnpin should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/gopkg
		  - github.com/choleraehyq/pid
		  - github.com/songzhibin97/gkit
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		procyield should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/sagernet/sing-tun
		  - github.com/slackhq/nebula
		  - golang.zx2c4.com/wireguard
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		profilealloc resets the current mcache's nextSample counter and
		records a memory profile sample.
		
		The caller must be non-preemptible and have a P.
	
		progToPointerMask returns the 1-bit pointer mask output by the GC program prog.
		size is the size of the region described by prog, in bytes.
		The resulting bitvector will have no more than size/goarch.PtrSize bits.
	
		publicationBarrier performs a store/store barrier (a "publication"
		or "export" barrier). Some form of synchronization is required
		between initializing an object and making that object accessible to
		another processor. Without synchronization, the initialization
		writes and the "publication" write may be reordered, allowing the
		other processor to follow the pointer and observe an uninitialized
		object. In general, higher-level synchronization should be used,
		such as locking or an atomic pointer write. publicationBarrier is
		for when those aren't an option, such as in the implementation of
		the memory manager.
		
		There's no corresponding barrier for the read side because the read
		side naturally has a data dependency order. All architectures that
		Go supports or seems likely to ever support automatically enforce
		data dependency ordering.
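		
		For ordinary Go code, the higher-level alternative mentioned above is
		typically an atomic pointer store. A minimal illustrative sketch (the
		config type and names here are hypothetical, not part of the runtime):
		
			package main
			
			import (
				"fmt"
				"sync/atomic"
			)
			
			type config struct {
				name string
				n    int
			}
			
			// current holds the most recently published configuration.
			var current atomic.Pointer[config]
			
			func publish(name string, n int) {
				c := &config{name: name, n: n} // initialization writes...
				current.Store(c)               // ...then the publication (atomic) write
			}
			
			func main() {
				publish("example", 42)
				if c := current.Load(); c != nil {
					fmt.Println(c.name, c.n) // readers see a fully initialized object
				}
			}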
	
		putempty puts a workbuf onto the work.empty list.
		Upon entry this goroutine owns b. The lfstack.push relinquishes ownership.
	
		Returns an extra M back to the list. mp must be from getExtraM. Newly
		allocated M's should use addExtraM.
	
		putfull puts the workbuf on the work.full list for the GC.
		putfull accepts partially full buffers so the GC can avoid competing
		with the mutators for ownership of partially full buffers.
	 func raceacquire(addr unsafe.Pointer)
	 func raceacquirectx(racectx uintptr, addr unsafe.Pointer)
	 func raceacquireg(gp *g, addr unsafe.Pointer)
	 func racectxend(racectx uintptr)
	 func racemalloc(p unsafe.Pointer, sz uintptr)
	 func racemapshadow(addr unsafe.Pointer, size uintptr)
		Notify the race detector of a send or receive involving buffer entry idx
		and a channel c or its communicating partner sg.
		This function handles the special case of c.elemsize==0.
	 func raceprocdestroy(ctx uintptr)
	 func racereadpc(addr unsafe.Pointer, callerpc, pc uintptr)
	 func racereadrangepc(addr unsafe.Pointer, sz, callerpc, pc uintptr)
	 func racerelease(addr unsafe.Pointer)
	 func racereleaseacquire(addr unsafe.Pointer)
	 func racereleaseacquireg(gp *g, addr unsafe.Pointer)
	 func racereleaseg(gp *g, addr unsafe.Pointer)
	 func racereleasemerge(addr unsafe.Pointer)
	 func racereleasemergeg(gp *g, addr unsafe.Pointer)
	 func racewritepc(addr unsafe.Pointer, callerpc, pc uintptr)
	 func racewriterangepc(addr unsafe.Pointer, sz, callerpc, pc uintptr)
		raisebadsignal is called when a signal is received on a non-Go
		thread, and the Go program does not want to handle it (that is, the
		program has not called os/signal.Notify for the signal).
	
		rand returns a random uint64 from the per-m chacha8 state.
		This is called from compiler-generated code.
		
		Do not change signature: used via linkname from other packages.
	
		rand32 is uint32(rand()), called from compiler-generated code.
	 func rand_fatal(s string)	
		randinit initializes the global random state.
		It must be called before any use of grand.
	
		randn is like rand() % n but faster.
		Do not change signature: used via linkname from other packages.
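		
		One common way to implement such a reduction (a sketch of the general
		technique, not necessarily the runtime's exact code) is a widening
		multiply and shift in place of the modulo; the small bias is ignored here:
		
			// randN32 maps a uniformly random 32-bit value r into [0, n)
			// without a divide: (r * n) >> 32.
			func randN32(r, n uint32) uint32 {
				return uint32((uint64(r) * uint64(n)) >> 32)
			}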
	
		rawbyteslice allocates a new byte slice. The byte slice is not zeroed.
	
		rawruneslice allocates a new rune slice. The rune slice is not zeroed.
	
		rawstring allocates storage for a new string. The returned
		string and byte slice both refer to the same storage.
		The storage is not zeroed. Callers should use
		the returned byte slice b to set the string contents and then drop b.
	
		read calls the read system call.
		It returns a non-negative number of bytes read or a negative errno value.
	 func readGCStats(pauses *[]uint64)	
		readGCStats_m must be called on the system stack because it acquires the heap
		lock. See mheap for details.
	
		All reads and writes of g's status go through readgstatus, casgstatus,
		castogscanstatus, and casfrom_Gscanstatus.
	
		readmemstats_m populates stats for internal runtime values.
		
		The world must be stopped.
	
		readMetricNames is the implementation of runtime/metrics.readMetricNames,
		used by the runtime/metrics test and otherwise unreferenced.
	
		readMetrics is the implementation of runtime/metrics.Read.
	
		readMetricsLocked is the internal, locked portion of readMetrics.
		
		Broken out for more robust testing. metricsLock must be held and
		initMetrics must have been called already.
	 func readRandom(r []byte) int	
		readTimeRandom stretches any entropy in the current time
		into entropy the length of r and XORs it into r.
		This is a fallback for when readRandom does not read
		the full requested amount.
		Whatever entropy r already contained is preserved.
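		
		A sketch of the general idea in ordinary Go (a hypothetical helper, not
		the runtime's code): derive a stream of mixed bits from the current time
		and XOR them into r, so any entropy r already holds is never discarded.
		
			package entropy
			
			import (
				"encoding/binary"
				"time"
			)
			
			// xorTimeEntropy stretches the current time into len(r) bytes with a
			// splitmix64-style mixer and XORs the result into r.
			func xorTimeEntropy(r []byte) {
				x := uint64(time.Now().UnixNano())
				for len(r) > 0 {
					x += 0x9e3779b97f4a7c15
					z := x
					z = (z ^ (z >> 30)) * 0xbf58476d1ce4e5b9
					z = (z ^ (z >> 27)) * 0x94d049bb133111eb
					z ^= z >> 31
			
					var buf [8]byte
					binary.LittleEndian.PutUint64(buf[:], z)
					n := len(r)
					if n > 8 {
						n = 8
					}
					for i := 0; i < n; i++ {
						r[i] ^= buf[i]
					}
					r = r[n:]
				}
			}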
	
		readTrace0 is ReadTrace's continuation on g0. This must run on the
		system stack because it acquires trace.lock.
	
		Read the bytes starting at the aligned pointer p into a uintptr.
		Read is little-endian.
	
		Note: These routines perform the read with a native endianness.
	
		readvarint reads a varint from p.
	
		readvarintUnsafe reads the uint32 in varint format starting at fd, and returns the
		uint32 and a pointer to the byte following the varint.
		
		The implementation is the same as runtime.readvarint, except that this function
		uses unsafe.Pointer for speed.
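		
		The same general base-128 varint scheme is available to ordinary code
		through encoding/binary; a small example for illustration:
		
			package main
			
			import (
				"encoding/binary"
				"fmt"
			)
			
			func main() {
				// Encode 300 as an unsigned varint, then decode it again.
				buf := binary.AppendUvarint(nil, 300)
				v, width := binary.Uvarint(buf)
				fmt.Printf("bytes=% x value=%d width=%d\n", buf, v, width)
				// Output: bytes=ac 02 value=300 width=2
			}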
	
		Mark gp ready to run.
	 func readyWithTime(s *sudog, traceskip int)	
		recordForPanic maintains a circular buffer of messages written by the
		runtime leading up to a process crash, allowing the messages to be
		extracted from a core dump.
		
		The text written during a process crash (following "panic" or "fatal
		error") is not saved, since the goroutine stacks will generally be readable
		from the runtime data structures in the core file.
	
		recordspan adds a newly allocated span to h.allspans.
		
		This only happens the first time a span is allocated from
		mheap.spanalloc (it is not called when a span is reused).
		
		Write barriers are disallowed here because it can be called from
		gcWork when allocating new workbufs. However, because it's an
		indirect call from the fixalloc initializer, the compiler can't see
		this.
		
		The heap lock must be held.
	
		Unwind the stack after a deferred function calls recover
		after a panic. Then arrange to continue running as though
		the caller of the deferred function returned normally.
		
		However, if unwinding the stack would skip over a Goexit call, we
		return into the Goexit loop instead, so it can continue processing
		defers instead.
	
		recv processes a receive operation on a full channel c.
		There are 2 parts:
		 1. The value sent by the sender sg is put into the channel
		    and the sender is woken up to go on its merry way.
		 2. The value received by the receiver (the current G) is
		    written to ep.
		
		For synchronous channels, both values are the same.
		For asynchronous channels, the receiver gets its data from
		the channel buffer and the sender's data is put in the
		channel buffer.
		Channel c must be full and locked. recv unlocks c with unlockf.
		sg must already be dequeued from c.
		A non-nil ep must point to the heap or the caller's stack.
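		
		The buffered-channel behavior described above is visible from ordinary
		code: when a receiver takes the value at the head of a full buffer, a
		blocked sender's value moves into the buffer, so FIFO order is preserved.
		A small illustration:
		
			package main
			
			import (
				"fmt"
				"time"
			)
			
			func main() {
				ch := make(chan int, 1)
				ch <- 1                           // buffer is now full
				go func() { ch <- 2 }()           // this send blocks until a receive happens
				time.Sleep(10 * time.Millisecond) // give the sender time to block (illustration only)
			
				fmt.Println(<-ch) // 1: the receiver gets the buffered value
				fmt.Println(<-ch) // 2: the blocked sender's value went into the buffer
			}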
	
		redZoneSize computes the size of the redzone for a given allocation.
		Refer to the implementation of the compiler-rt.
	
		The goroutine g is about to enter a system call.
		Record that it's not using the cpu anymore.
		This is called only from the go syscall library and cgocall,
		not from the low-level system calls used by the runtime.
		
		Entersyscall cannot split the stack: the save must
		make g->sched refer to the caller's stack segment, because
		entersyscall is going to return immediately after.
		
		Nothing entersyscall calls can split the stack either.
		We cannot safely move the stack during an active call to syscall,
		because we do not know which of the uintptr arguments are
		really pointers (back into the stack).
		In practice, this means that we make the fast path run through
		entersyscall doing no-split things, and the slow path has to use systemstack
		to run bigger things on the system stack.
		
		reentersyscall is the entry point used by cgo callbacks, where explicitly
		saved SP and PC are restored. This is needed when exitsyscall will be called
		from a function further up in the call stack than the parent, as g->syscallsp
		must always point to a valid stack frame. entersyscall below is the normal
		entry point for syscalls, which obtains the SP and PC from the caller.
	
		reflect_addReflectOff adds a pointer to the reflection offset lookup map.
	 func reflect_chancap(c *hchan) int
	 func reflect_chanlen(c *hchan) int
		reflect_gcbits returns the GC type info for x, for testing.
		The result is the bitmap entries (0 or 1), one entry per byte.
	
		reflect_growslice should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/cloudwego/dynamicgo
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		reflect_ifaceE2I is for package reflect,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - gitee.com/quant1x/gox
		  - github.com/modern-go/reflect2
		  - github.com/v2pro/plz
		
		Do not remove or change the type signature.
	 func reflect_makechan(t *chantype, size int) *hchan	
		reflect_makemap is for package reflect,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - gitee.com/quant1x/gox
		  - github.com/modern-go/reflect2
		  - github.com/goccy/go-json
		  - github.com/RomiChan/protobuf
		  - github.com/segmentio/encoding
		  - github.com/v2pro/plz
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		reflect_mapaccess is for package reflect,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - gitee.com/quant1x/gox
		  - github.com/modern-go/reflect2
		  - github.com/v2pro/plz
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	 func reflect_mapaccess_faststr(t *abi.SwissMapType, m *maps.Map, key string) unsafe.Pointer	
		reflect_mapassign is for package reflect,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - gitee.com/quant1x/gox
		  - github.com/v2pro/plz
		
		Do not remove or change the type signature.
	 func reflect_mapassign_faststr(t *abi.SwissMapType, m *maps.Map, key string, elem unsafe.Pointer)
	 func reflect_mapclear(t *abi.SwissMapType, m *maps.Map)
	 func reflect_mapdelete(t *abi.SwissMapType, m *maps.Map, key unsafe.Pointer)
	 func reflect_mapdelete_faststr(t *abi.SwissMapType, m *maps.Map, key string)
		reflect_mapiterelem is a compatibility wrapper for map iterator for users of
		//go:linkname from before Go 1.24. It is not used by Go itself. New users
		should use reflect or the maps package.
		
		reflect_mapiterelem should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/goccy/go-json
		  - gonum.org/v1/gonum
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		reflect_mapiterinit is a compatibility wrapper for map iterator for users of
		//go:linkname from before Go 1.24. It is not used by Go itself. New users
		should use reflect or the maps package.
		
		reflect_mapiterinit should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/modern-go/reflect2
		  - gitee.com/quant1x/gox
		  - github.com/v2pro/plz
		  - github.com/wI2L/jettison
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		reflect_mapiterkey is a compatibility wrapper for map iterator for users of
		//go:linkname from before Go 1.24. It is not used by Go itself. New users
		should use reflect or the maps package.
		
		reflect_mapiterkey should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/goccy/go-json
		  - gonum.org/v1/gonum
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		reflect_mapiternext is a compatibility wrapper for map iterator for users of
		//go:linkname from before Go 1.24. It is not used by Go itself. New users
		should use reflect or the maps package.
		
		reflect_mapiternext is for package reflect,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - gitee.com/quant1x/gox
		  - github.com/modern-go/reflect2
		  - github.com/goccy/go-json
		  - github.com/v2pro/plz
		  - github.com/wI2L/jettison
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		reflect_maplen is for package reflect,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/goccy/go-json
		  - github.com/wI2L/jettison
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	 func reflect_memmove(to, from unsafe.Pointer, n uintptr)	
		reflect_resolveNameOff resolves a name offset from a base pointer.
		
		reflect_resolveNameOff is for package reflect,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/agiledragon/gomonkey/v2
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		reflect_resolveTextOff resolves a function pointer offset from a base type.
		
		reflect_resolveTextOff is for package reflect,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/agiledragon/gomonkey/v2
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		reflect_resolveTypeOff resolves an *rtype offset from a base type.
		
		reflect_resolveTypeOff is meant for package reflect,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - gitee.com/quant1x/gox
		  - github.com/modern-go/reflect2
		  - github.com/v2pro/plz
		  - github.com/timandy/routine
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	 func reflect_rselect(cases []runtimeSelect) (int, bool)	
		reflect_typedmemclr is meant for package reflect,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/ugorji/go/codec
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		reflect_typedmemmove is meant for package reflect,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - gitee.com/quant1x/gox
		  - github.com/goccy/json
		  - github.com/modern-go/reflect2
		  - github.com/ugorji/go/codec
		  - github.com/v2pro/plz
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		reflect_typedslicecopy is meant for package reflect,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - gitee.com/quant1x/gox
		  - github.com/modern-go/reflect2
		  - github.com/RomiChan/protobuf
		  - github.com/segmentio/encoding
		  - github.com/v2pro/plz
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		reflect_typelinks is meant for package reflect,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - gitee.com/quant1x/gox
		  - github.com/goccy/json
		  - github.com/modern-go/reflect2
		  - github.com/vmware/govmomi
		  - github.com/pinpoint-apm/pinpoint-go-agent
		  - github.com/timandy/routine
		  - github.com/v2pro/plz
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		reflect_unsafe_New is meant for package reflect,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - gitee.com/quant1x/gox
		  - github.com/goccy/json
		  - github.com/modern-go/reflect2
		  - github.com/v2pro/plz
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		reflect_unsafe_NewArray is meant for package reflect,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - gitee.com/quant1x/gox
		  - github.com/bytedance/sonic
		  - github.com/goccy/json
		  - github.com/modern-go/reflect2
		  - github.com/segmentio/encoding
		  - github.com/segmentio/kafka-go
		  - github.com/v2pro/plz
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		reflect_verifyNotInHeapPtr reports whether converting the not-in-heap pointer into a unsafe.Pointer is ok.
	
		reflectcall calls fn with arguments described by stackArgs, stackArgsSize,
		frameSize, and regArgs.
		
		Arguments passed on the stack and space for return values passed on the stack
		must be laid out at the space pointed to by stackArgs (with total length
		stackArgsSize) according to the ABI.
		
		stackRetOffset must be some value <= stackArgsSize that indicates the
		offset within stackArgs where the return value space begins.
		
		frameSize is the total size of the argument frame at stackArgs and must
		therefore be >= stackArgsSize. It must include additional space for spilling
		register arguments for stack growth and preemption.
		
		TODO(mknyszek): Once we don't need the additional spill space, remove frameSize,
		since frameSize will be redundant with stackArgsSize.
		
		Arguments passed in registers must be laid out in regArgs according to the ABI.
		regArgs will hold any return values passed in registers after the call.
		
		reflectcall copies stack arguments from stackArgs to the goroutine stack, and
		then copies back stackArgsSize-stackRetOffset bytes back to the return space
		in stackArgs once fn has completed. It also "unspills" argument registers from
		regArgs before calling fn, and spills them back into regArgs immediately
		following the call to fn. If there are results being returned on the stack,
		the caller should pass the argument frame type as stackArgsType so that
		reflectcall can execute appropriate write barriers during the copy.
		
		reflectcall expects regArgs.ReturnIsPtr to be populated indicating which
		registers on the return path will contain Go pointers. It will then store
		these pointers in regArgs.Ptrs such that they are visible to the GC.
		
		Package reflect passes a frame type. In package runtime, there is only
		one call that copies results back, in callbackWrap in syscall_windows.go, and it
		does NOT pass a frame type, meaning there are no write barriers invoked. See that
		call site for justification.
		
		Package reflect accesses this symbol through a linkname.
		
		Arguments passed through to reflectcall do not escape. The type is used
		only in a very limited callee of reflectcall, the stackArgs are copied, and
		regArgs is only used in the reflectcall frame.
	
		reflectcallmove is invoked by reflectcall to copy the return values
		out of the stack and into the heap, invoking the necessary write
		barriers. dst, src, and size describe the return value area to
		copy. typ describes the entire frame (not just the return values).
		typ may be nil, which indicates write barriers are not needed.
		
		It must be nosplit and must only call nosplit functions because the
		stack map of reflectcall is wrong.
	 func reflectlite_ifaceE2I(inter *interfacetype, e eface, dst *iface)
	 func reflectlite_maplen(m *maps.Map) int
		reflectlite_resolveNameOff resolves a name offset from a base pointer.
	
		reflectlite_resolveTypeOff resolves an *rtype offset from a base type.
	 func reflectlite_typedmemmove(typ *_type, dst, src unsafe.Pointer)	
		This function may be called in nosplit context and thus must be nosplit.
	
		Disassociate p and the current m.
	
		Disassociate p and the current m without tracing an event.
	 func releaseSudog(s *sudog)	
		Removes the finalizer (if any) from the object p.
	
		Removes the Special record of the given kind for the object p.
		Returns the record if the record existed, nil otherwise.
		The caller must FixAlloc_Free the result.
	
		reparsedebugvars reparses the runtime's debug variables
		because the environment variable has been changed to env.
	
		resetForSleep is called after the goroutine is parked for timeSleep.
		We can't call timer.reset in timeSleep itself because if this is a short
		sleep and there are many goroutines then the P can wind up running the
		timer function, goroutineReady, before the goroutine has been parked.
	
		resetTimer resets an inactive timer, adding it to the timer heap.
		
		Reports whether the timer was modified before it was run.
	
		restoreGsignalStack restores the gsignal stack to the value it had
		before entering the signal handler.
	
		resumeG undoes the effects of suspendG, allowing the suspended
		goroutine to continue from its current safe-point.
	
		Retpolines, used by -spectre=ret flag in cmd/asm, cmd/compile.
	
		retryOnEAGAIN retries a function until it does not return EAGAIN.
		It will use an increasing delay between calls, and retry up to 20 times.
		The function argument is expected to return an errno value,
		and retryOnEAGAIN will return any errno value other than EAGAIN.
		If all retries return EAGAIN, then retryOnEAGAIN will return EAGAIN.
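		
		A user-level sketch of the same pattern (a hypothetical helper; the
		runtime's exact delay schedule is not reproduced here):
		
			package main
			
			import (
				"syscall"
				"time"
			)
			
			// retryOnEAGAIN retries fn until it returns something other than EAGAIN,
			// sleeping a little longer after each attempt, up to 20 tries.
			func retryOnEAGAIN(fn func() syscall.Errno) syscall.Errno {
				for tries := 0; tries < 20; tries++ {
					errno := fn()
					if errno != syscall.EAGAIN {
						return errno
					}
					time.Sleep(time.Duration(tries+1) * time.Millisecond) // increasing delay
				}
				return syscall.EAGAIN
			}
			
			func main() {
				_ = retryOnEAGAIN(func() syscall.Errno { return 0 })
			}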
	
		return0 is a stub used to return 0 from deferproc.
		It is called at the very end of deferproc to signal
		the calling Go function that it should not jump
		to deferreturn.
		It is defined in asm_*.s.
	
		round x up to a power of 2.
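		
		A compact way to express this operation with math/bits (illustrative sketch):
		
			package mathutil
			
			import "math/bits"
			
			// roundUpPow2 rounds x up to the nearest power of two: 5 -> 8, 8 -> 8.
			func roundUpPow2(x uint) uint {
				if x <= 1 {
					return 1
				}
				return 1 << bits.Len(x-1)
			}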
	
		Returns size of the memory block that mallocgc will allocate if you ask for the size,
		minus any inline space for metadata.
	
		rt_sigaction is implemented in assembly.
	 func rtsigprocmask(how int32, new, old *sigset, size int32)
	 func runExitHooks(code int)
		This is the goroutine that runs all of the finalizers and cleanups.
	
		runGCProg returns the number of 1-bit entries written to memory.
	
		runPerThreadSyscall runs perThreadSyscall for this M if required.
		
		This function throws if the system call returns with anything other than the
		expected values.
	
		runqdrain drains the local runnable queue of pp and returns all goroutines in it.
		Executed only by the owner P.
	
		runqempty reports whether pp has no Gs on its local run queue.
		It never returns true spuriously.
	
		Get g from local runnable queue.
		If inheritTime is true, gp should inherit the remaining time in the
		current time slice. Otherwise, it should start a new time slice.
		Executed only by the owner P.
	
		Grabs a batch of goroutines from pp's runnable queue into batch.
		Batch is a ring buffer starting at batchHead.
		Returns number of grabbed goroutines.
		Can be executed by any P.
	
		runqput tries to put g on the local runnable queue.
		If next is false, runqput adds g to the tail of the runnable queue.
		If next is true, runqput puts g in the pp.runnext slot.
		If the run queue is full, runqput puts g on the global queue.
		Executed only by the owner P.
	
		runqputbatch tries to put all the G's on q on the local runnable queue.
		If the queue is full, they are put on the global queue; in that case
		this will temporarily acquire the scheduler lock.
		Executed only by the owner P.
	
		Put g and a batch of work from local runnable queue on global queue.
		Executed only by the owner P.
	
		Steal half of elements from local runnable queue of p2
		and put onto local runnable queue of p.
		Returns one of the stolen elements (or nil if failed).
	
		runSafePointFn runs the safe point function, if any, for this P.
		This should be called like
		
			if getg().m.p.runSafePointFn != 0 {
			    runSafePointFn()
			}
		
		runSafePointFn must be checked on any transition in to _Pidle or
		_Psyscall to avoid a race where forEachP sees that the P is running
		just before the P goes into _Pidle/_Psyscall and neither forEachP
		nor the P run the safe-point function.
	
		runtime_expandFinalInlineFrame expands the final pc in stk to include all
		"callers" if pc is inline.
		
		runtime_expandFinalInlineFrame should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/grafana/pyroscope-go/godeltaprof
		  - github.com/pyroscope-io/godeltaprof
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		runtime_FrameStartLine returns the start line of the function in a Frame.
		
		runtime_FrameStartLine should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/grafana/pyroscope-go/godeltaprof
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		runtime_FrameSymbolName returns the full symbol name of the function in a Frame.
		For generic functions this differs from f.Function in that it does not replace
		the shape name with "...".
		
		runtime_FrameSymbolName should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/grafana/pyroscope-go/godeltaprof
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		runtime_getProfLabel should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/cloudwego/localsession
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		readProfile, provided to runtime/pprof, returns the next chunk of
		binary CPU profiling stack trace data, blocking until data is available.
		If profiling is turned off and all the profile data accumulated while it was
		on has been returned, readProfile returns eof=true.
		The caller must save the returned data and tags before calling readProfile again.
		The returned data contains a whole number of records, and tags contains
		exactly one entry per record.
		
		runtime_pprof_readProfile should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/pyroscope-io/pyroscope
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		runtime_setProfLabel should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/cloudwego/localsession
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		save updates getg().sched to refer to pc and sp so that a following
		gogo will restore pc and sp.
		
		save must not have write barriers because invoking a write barrier
		can clobber getg().sched.
	
		saveAncestors copies previous ancestors of the given caller g and
		includes info for the current caller into a new set of tracebacks for
		a g being created.
	
		saveblockevent records a profile event of the type specified by which.
		cycles is the quantity associated with this event and rate is the sampling rate,
		used to adjust the cycles value in the manner determined by the profile type.
		skip is the number of frames to omit from the traceback associated with the event.
		The traceback will be recorded from the stack of the goroutine associated with the current m.
		skip should be positive if this event is recorded from the current stack
		(e.g. when this is not called from a system stack)
	 func saveBlockEventStack(cycles, rate int64, stk []uintptr, which bucketType)
	 func saveg(pc, sp uintptr, gp *g, r *profilerecord.StackRecord, pcbuf []uintptr)
		scanblock scans b as scanobject would, but using an explicit
		pointer bitmap instead of the heap bitmap.
		
		This is used to scan non-heap roots, so it does not update
		gcw.bytesMarked or gcw.heapScanWork.
		
		If stk != nil, possible stack pointers are also reported to stk.putPtr.
	
		scanConservative scans block [b, b+n) conservatively, treating any
		pointer-like value in the block as a pointer.
		
		If ptrmask != nil, only words that are marked in ptrmask are
		considered as potential pointers.
		
		If state != nil, it's assumed that [b, b+n) is a block in the stack
		and may contain pointers to stack objects.
	
		Scan a stack frame: local variables and function arguments/results.
	
		scanobject scans the object starting at b, adding pointers to gcw.
		b must point to the beginning of a heap object or an oblet.
		scanobject consults the GC bitmap for the pointer mask and the
		spans for the size of the object.
	
		scanstack scans gp's stack, greying all pointers found on the stack.
		
		Returns the amount of scan work performed, but doesn't update
		gcController.stackScanWork or flush any credit. Any background credit produced
		by this function should be flushed by its caller. scanstack itself can't
		safely flush because it may result in trying to wake up a goroutine that
		was just scanned, resulting in a self-deadlock.
		
		scanstack will also shrink the stack if it is safe to do so. If it
		is not, it schedules a stack shrink for the next synchronous safe
		point.
		
		scanstack is marked go:systemstack because it must not be preempted
		while using a workbuf.
	 func sched_getaffinity(pid, len uintptr, buf *byte) int32	
		schedEnabled reports whether gp should be scheduled. It returns
		false if scheduling of gp is disabled.
		
		sched.lock must be held.
	
		schedEnableUser enables or disables the scheduling of user
		goroutines.
		
		This does not stop already running user goroutines, so the caller
		should first stop the world when disabling user goroutines.
	
		The bootstrap sequence is:
		
			call osinit
			call schedinit
			make & queue new G
			call runtime·mstart
		
		The new G calls runtime·main.
	 func schedtrace(detailed bool)	
		One round of scheduler: find a runnable goroutine and execute it.
		Never returns.
	
		selectgo implements the select statement.
		
		cas0 points to an array of type [ncases]scase, and order0 points to
		an array of type [2*ncases]uint16 where ncases must be <= 65536.
		Both reside on the goroutine's stack (regardless of any escaping in
		selectgo).
		
		For race detector builds, pc0 points to an array of type
		[ncases]uintptr (also on the stack); for other builds, it's set to
		nil.
		
		selectgo returns the index of the chosen scase, which matches the
		ordinal position of its respective select{recv,send,default} call.
		Also, if the chosen scase was a receive operation, it reports whether
		a value was received.
	
		compiler implements
		
			select {
			case v, ok = <-c:
				... foo
			default:
				... bar
			}
		
		as
		
			if selected, ok = selectnbrecv(&v, c); selected {
				... foo
			} else {
				... bar
			}
	
		compiler implements
		
			select {
			case c <- v:
				... foo
			default:
				... bar
			}
		
		as
		
			if selectnbsend(c, v) {
				... foo
			} else {
				... bar
			}
	 func selectsetpc(pc *uintptr)	
		Called from runtime.
	 func semacquire1(addr *uint32, lifo bool, profile semaProfileFlags, skipframes int, reason waitReason)
	 func semacreate(mp *m)
	 func semawakeup(mp *m)
	 func semrelease(addr *uint32)
	 func semrelease1(addr *uint32, handoff bool, skipframes int)
		send processes a send operation on an empty channel c.
		The value ep sent by the sender is copied to the receiver sg.
		The receiver is then woken up to go on its merry way.
		Channel c must be empty and locked.  send unlocks c with unlockf.
		sg must already be dequeued from c.
		ep must be non-nil and point to the heap or the caller's stack.
	
		setCheckmark throws if marking obj is a checkmarks violation,
		and otherwise sets obj's checkmark. It returns true if obj was
		already checkmarked.
	
		setcpuprofilerate sets the CPU profiling rate to hz times per second.
		If hz <= 0, setcpuprofilerate turns off CPU profiling.
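		
		From user code this rate is driven through the public profiling APIs;
		a minimal runtime/pprof sketch (the output file name is illustrative):
		
			package main
			
			import (
				"log"
				"os"
				"runtime/pprof"
			)
			
			func main() {
				f, err := os.Create("cpu.pprof")
				if err != nil {
					log.Fatal(err)
				}
				defer f.Close()
			
				// StartCPUProfile enables CPU profiling (100 Hz by default);
				// StopCPUProfile turns it back off.
				if err := pprof.StartCPUProfile(f); err != nil {
					log.Fatal(err)
				}
				defer pprof.StopCPUProfile()
			
				// ... workload to profile ...
			}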
	 func setCrashFD(fd uintptr) uintptr	
		Update the C environment if cgo is loaded.
	 func setGCPercent(in int32) (out int32)
	 func setGCPhase(x uint32)
		setGNoWB performs *gp = new without a write barrier.
		For times when it's impractical to use a guintptr.
	
		setGsignalStack sets the gsignal stack of the current m to an
		alternate signal stack returned from the sigaltstack system call.
		It saves the old values in *old for use by restoreGsignalStack.
		This is used when handling a signal if non-Go code has set the
		alternate signal stack.
	 func setMaxStack(in int) (out int)
	 func setMaxThreads(in int) (out int)
	 func setMemoryLimit(in int64) (out int64)
		setMNoWB performs *mp = new without a write barrier.
		For times when it's impractical to use an muintptr.
	 func setPanicOnFault(new bool) (old bool)	
		setPinned marks or unmarks a Go pointer as pinned, when ptr is a Go pointer.
		Pinning a non-Go pointer is ignored, and unpinning a non-Go pointer panics;
		neither should happen in normal usage.
	
		setProcessCPUProfilerTimer is called when the profiling timer changes.
		It is called with prof.signalLock held. hz is the new timer, and is 0 if
		profiling is being disabled. Enable or disable the signal as
		required for -buildmode=c-archive.
	
		Set the heap profile bucket associated with addr to b.
	
		setSignalstackSP sets the ss_sp field of a stackt.
	
		setsigsegv is used on darwin/arm64 to fake a segmentation fault.
		
		This is exported via linkname to assembly in runtime/cgo.
	
		setThreadCPUProfilerHz makes any thread-specific changes required to
		implement profiling at a rate of hz.
		No changes required on Unix systems when using setitimer.
	
		Called from assembly only; declared for go vet.
	 func setTraceback(level string)	
		Shade the object if it isn't already.
		The object is not nil and known to be in the heap.
		Preemption must be disabled.
	
		shouldPushSigpanic reports whether pc should be used as sigpanic's
		return PC (pushing a frame for the call). Otherwise, it should be
		left alone so that LR is used as sigpanic's return PC, effectively
		replacing the top-most frame with sigpanic. This is used by
		preparePanic.
	
		showframe reports whether the frame with the given characteristics should
		be printed during a traceback.
	
		showfuncinfo reports whether a function with the given characteristics should
		be printed during a traceback.
	
		Maybe shrink the stack being used by gp.
		
		gp must be stopped and we must own its stack. It may be in
		_Grunning, but only if this is our own user G.
	 func sigaction(sig uint32, new, old *sigactiont)
	 func sigaltstack(new, old *stackt)
		sigblock blocks signals in the current thread's signal mask.
		This is used to block signals while setting up and tearing down g
		when a non-Go thread calls a Go function. When a thread is exiting
		we use the sigsetAllExiting value, otherwise the OS specific
		definition of sigset_all is used.
		This is nosplit and nowritebarrierrec because it is called by needm
		which may be called on a non-Go thread with no g available.
	
		sigdisable disables the Go signal handler for the signal sig.
		It is only called while holding the os/signal.handlers lock,
		via os/signal.disableSignal and signal_disable.
	
		sigenable enables the Go signal handler to catch the signal sig.
		It is only called while holding the os/signal.handlers lock,
		via os/signal.enableSignal and signal_enable.
	
		sigFetchG fetches the value of G safely when running in a signal handler.
		On some architectures, the g value may be clobbered when running in a VDSO.
		See issue #32912.
	 func sigfillset(mask *uint64)	
		Determines if the signal should be handled by Go and if not, forwards the
		signal to the handler that was installed before Go's. Returns whether the
		signal was forwarded.
		This is called by the signal handler, and the world may be stopped.
	
		sighandler is invoked when a signal occurs. The global g will be
		set to a gsignal goroutine and we will be running on the alternate
		signal stack. The parameter gp will be the value of the global g
		when the signal occurred. The sig, info, and ctxt parameters are
		from the system signal handler: they are the parameters passed when
		the SA is passed to the sigaction system call.
		
		The garbage collector may have stopped the world, so write barriers
		are not allowed.
	
		sigignore ignores the signal sig.
		It is only called while holding the os/signal.handlers lock,
		via os/signal.ignoreSignal and signal_ignore.
	
		sigInitIgnored marks the signal as already ignored. This is called at
		program start by initsig. In a shared library initsig is called by
		libpreinit, so the runtime may not be initialized yet.
	
		Must only be called from a single goroutine at a time.
	
		Must only be called from a single goroutine at a time.
	
		Must only be called from a single goroutine at a time.
	
		Checked by signal handlers.
	
		Called to receive the next queued signal.
		Must only be called from a single goroutine at a time.
	
		signalDuringFork is called if we receive a signal while doing a fork.
		We do not want signals at that time, as a signal sent to the process
		group may be delivered to the child process, causing confusion.
		This should never be called, because we block signals across the fork;
		this function is just a safety check. See issue 18600 for background.
	
		signalM sends a signal to mp.
	
		signalstack sets the current thread's alternate signal stack to s.
	
		signalWaitUntilIdle waits until the signal delivery mechanism is idle.
		This is used to ensure that we do not drop a signal notification due
		to a race between disabling a signal and receiving a signal.
		This assumes that signal delivery has already been disabled for
		the signal(s) in question, and here we are just waiting to make sure
		that all the signals have been delivered to the user channels
		by the os/signal package.
	
		This is called if we receive a signal when there is a signal stack
		but we are not on it. This can only happen if non-Go code called
		sigaction without setting the SS_ONSTACK flag.
	
		sigpanic turns a synchronous signal into a run-time panic.
		If the signal handler sees a synchronous panic, it arranges the
		stack to look like the function where the signal occurred called
		sigpanic, sets the signal's PC value to sigpanic, and returns from
		the signal handler. The effect is that the program will act as
		though the function that got the signal simply called sigpanic
		instead.
		
		This must NOT be nosplit because the linker doesn't know where
		sigpanic calls can be injected.
		
		The signal handler must not inject a call to sigpanic if
		getg().throwsplit, since sigpanic may need to grow the stack.
		
		This is exported via linkname to assembly in runtime/cgo.
	
		Injected by the signal handler for panicking signals.
		Initializes any registers that have fixed meaning at calls but
		are scratch in bodies and calls sigpanic.
		On many platforms it just jumps to sigpanic.
	 func sigprocmask(how int32, new, old *sigset)	
		Called if we receive a SIGPROF signal.
		Called by the signal handler, may run during STW.
	
		sigprofNonGo is called if we receive a SIGPROF signal on a non-Go thread,
		and the signal handler collected a stack trace in sigprofCallers.
		When this is called, sigprofCallersUse will be non-zero.
		g is nil, and what we can do is very limited.
		
		It is called from the signal handling functions written in assembly code that
		are active for cgo programs, cgoSigtramp and sigprofNonGoWrapper, which have
		not verified that the SIGPROF delivery corresponds to the best available
		profiling source for this thread.
	
		sigprofNonGoPC is called when a profiling signal arrived on a
		non-Go thread and we have a single PC value, not a stack trace.
		g is nil, and what we can do is very limited.
	
		sigsave saves the current thread's signal mask into *p.
		This is used to preserve the non-Go signal mask when a non-Go
		thread calls a Go function.
		This is nosplit and nowritebarrierrec because it is called by needm
		which may be called on a non-Go thread with no g available.
	
		sigsend delivers a signal from sighandler to the internal signal delivery queue.
		It reports whether the signal was sent. If not, the caller typically crashes the program.
		It runs from the signal handler, so it's limited in what it can do.
	
		sigtrampgo is called from the signal handler function, sigtramp,
		written in assembly code.
		This is called by the signal handler, and the world may be stopped.
		
		It must be nosplit because getg() is still the G that was running
		(if any) when the signal was delivered, but it's (usually) called
		on the gsignal stack. Until this switches the G to gsignal, the
		stack bounds check won't work.
	
		slicebytetostring converts a byte slice to a string.
		It is inserted by the compiler into generated code.
		ptr is a pointer to the first element of the slice;
		n is the length of the slice.
		buf is a fixed-size buffer for the result;
		it is not nil if the result does not escape.
	
		slicebytetostringtmp returns a "string" referring to the actual []byte bytes.
		
		Callers need to ensure that the returned string will not be used after
		the calling goroutine modifies the original slice or synchronizes with
		another goroutine.
		
		The function is only called when instrumenting
		and otherwise intrinsified by the compiler.
		
		Some internal compiler optimizations use this function.
		  - Used for m[T1{... Tn{..., string(k), ...} ...}] and m[string(k)]
		    where k is []byte, T1 to Tn is a nesting of struct and array literals.
		  - Used for "<"+string(b)+">" concatenation where b is []byte.
		  - Used for string(b)=="foo" comparison where b is []byte.
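		
		For example, the map-index and comparison forms listed above let code
		work with []byte keys without allocating a temporary string:
		
			package main
			
			import "fmt"
			
			func main() {
				m := map[string]int{"foo": 1}
				key := []byte("foo")
			
				// The compiler recognizes m[string(key)] and performs the lookup
				// without allocating a new string for the conversion.
				if v, ok := m[string(key)]; ok {
					fmt.Println(v)
				}
			
				// Comparisons such as string(key) == "foo" are optimized the same way.
				fmt.Println(string(key) == "foo")
			}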
	
		slicecopy is used to copy from a string or slice of pointerless elements into a slice.
	 func slicerunetostring(buf *tmpBuf, a []rune) string	
		spanHasNoSpecials marks a span as having no specials in the arena bitmap.
	
		spanHasSpecials marks a span as having specials in the arena bitmap.
	
		spanOf returns the span of p. If p does not point into the heap
		arena or no span has ever contained p, spanOf returns nil.
		
		If p does not point to allocated memory, this may return a non-nil
		span that does *not* contain p. If this is a possibility, the
		caller should either call spanOfHeap or check the span bounds
		explicitly.
		
		Must be nosplit because it has callers that are nosplit.
	
		spanOfHeap is like spanOf, but returns nil if p does not point to a
		heap object.
		
		Must be nosplit because it has callers that are nosplit.
	
		spanOfUnchecked is equivalent to spanOf, but the caller must ensure
		that p points into an allocated heap arena.
		
		Must be nosplit because it has callers that are nosplit.
	
		Used by reflectcall and the reflect package.
		
		Spills/loads arguments in registers to/from an internal/abi.RegArgs
		respectively. Does not follow the Go ABI.
	
		stackalloc allocates an n byte stack.
		
		stackalloc must run on the system stack because it uses per-P
		resources and must not split the stack.
	
		stackcacherefill/stackcacherelease implement a global pool of stack segments.
		The pool is required to prevent unlimited growth of per-thread caches.
	 func stackcacherelease(c *mcache, order uint8)	
		stackcheck checks that SP is in range [g->stack.lo, g->stack.hi).
	
		stackfree frees an n byte stack allocation at stk.
		
		stackfree must run on the system stack because it uses per-P
		resources and must not split the stack.
	
		stacklog2 returns ⌊log_2(n)⌋.
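		
		For n > 0 this equals bits.Len(n) - 1; an illustrative sketch using math/bits:
		
			package mathutil
			
			import "math/bits"
			
			// floorLog2 returns ⌊log₂(n)⌋ for n > 0.
			func floorLog2(n uintptr) int {
				return bits.Len(uint(n)) - 1
			}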
	 func stackmapdata(stkmap *stackmap, n int32) bitvector	
		Allocates a stack from the free pool. Must be called with
		stackpool[order].item.mu held.
	
		Adds stack x to the free pool. Must be called with stackpool[order].item.mu held.
	
		startCheckmarks prepares for the checkmarks phase.
		
		The world must be stopped.
	
		Schedules the locked m to run the locked gp.
		May run during STW, so write barriers are not allowed.
	
		Schedules some M to run the p (creates an M if necessary).
		If p==nil, tries to get an idle P; if there are no idle P's, does nothing.
		May run with m.p==nil, so write barriers are not allowed.
		If spinning is set, the caller has incremented nmspinning and must provide a
		P. startm will set m.spinning in the newly started M.
		
		Callers passing a non-nil P must call from a non-preemptible context. See
		comment on acquirem below.
		
		Argument lockheld indicates whether the caller already acquired the
		scheduler lock. Callers holding the lock when making the call must pass
		true. The lock might be temporarily dropped, but will be reacquired before
		returning.
		
		Must not have write barriers because this may be called without a P.
	
		startpanic_m prepares for an unrecoverable panic.
		
		It returns true if panic messages should be printed, or false if
		the runtime is in bad shape and should just print stacks.
		
		It must not have write barriers even though the write barrier
		explicitly ignores writes once dying > 0. Write barriers still
		assume that g.m.p != nil, and this function may not have P
		in some contexts (e.g. a panic in a signal handler for a signal
		sent to an M with no P).
	
		startPCForTrace returns the start PC of a goroutine for tracing purposes.
		If pc is a wrapper, it returns the PC of the wrapped function. Otherwise it
		returns pc.
	
		startTemplateThread starts the template thread if it is not already
		running.
		
		The calling thread must itself be in a known-good state.
	
		startTheWorld undoes the effects of stopTheWorld.
		
		w must be the worldStop returned by stopTheWorld.
	
		startTheWorldGC undoes the effects of stopTheWorldGC.
		
		w must be the worldStop returned by stopTheWorld.
	
		reason is the same STW reason passed to stopTheWorld. start is the start
		time returned by stopTheWorld.
		
		now is the current time; prefer to pass 0 to capture a fresh timestamp.
		
		startTheWorldWithSema returns now.
	
		stealWork attempts to steal a runnable goroutine or timer from any P.
		
		If newWork is true, new work may have been readied.
		
		If now is not 0 it is the current time. stealWork returns the passed time or
		the current time if now was passed as 0.
	
		step advances to the next pc, value pair in the encoded table.
	
		Return the bucket for stk[0:nstk], allocating new bucket if needed.
	
		Stops execution of the current m that is locked to a g until the g is runnable again.
		Returns with acquired P.
	
		Stops execution of the current m until new work is available.
		Returns with acquired P.
	
		stopTheWorld stops all P's from executing goroutines, interrupting
		all goroutines at GC safe points and records reason as the reason
		for the stop. On return, only the current goroutine's P is running.
		stopTheWorld must not be called from a system stack and the caller
		must not hold worldsema. The caller must call startTheWorld when
		other P's should resume execution.
		
		stopTheWorld is safe for multiple goroutines to call at the
		same time. Each will execute its own stop, and the stops will
		be serialized.
		
		This is also used by routines that do stack dumps. If the system is
		in panic or being exited, this may not reliably stop all
		goroutines.
		
		Returns the STW context. When starting the world, this context must be
		passed to startTheWorld.
	
		stopTheWorldGC has the same effect as stopTheWorld, but blocks
		until the GC is not running. It also blocks a GC from starting
		until startTheWorldGC is called.
	
		stopTheWorldWithSema is the core implementation of stopTheWorld.
		The caller is responsible for acquiring worldsema and disabling
		preemption first, and then should call stopTheWorldWithSema on the
		system stack:
		
			semacquire(&worldsema, 0)
			m.preemptoff = "reason"
			var stw worldStop
			systemstack(func() {
				stw = stopTheWorldWithSema(reason)
			})
		
		When finished, the caller must either call startTheWorld or undo
		these three operations separately:
		
			m.preemptoff = ""
			systemstack(func() {
				now = startTheWorldWithSema(stw)
			})
			semrelease(&worldsema)
		
		It is allowed to acquire worldsema once and then execute multiple
		startTheWorldWithSema/stopTheWorldWithSema pairs.
		Other P's are able to execute between successive calls to
		startTheWorldWithSema and stopTheWorldWithSema.
		Holding worldsema causes any other goroutines invoking
		stopTheWorld to block.
		
		Returns the STW context. When starting the world, this context must be
		passed to startTheWorldWithSema.
	
		stopTimer stops a timer.
		It reports whether t was stopped before being run.
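		
		The public time.Timer API surfaces the same "stopped before it ran"
		notion; a small sketch:
		
			package main
			
			import (
				"fmt"
				"time"
			)
			
			func main() {
				t := time.AfterFunc(time.Hour, func() { fmt.Println("fired") })
			
				// Stop reports whether the timer was stopped before it ran.
				fmt.Println(t.Stop()) // true
			
				// Reset re-arms the now-inactive timer.
				t.Reset(10 * time.Millisecond)
				time.Sleep(50 * time.Millisecond) // let it fire: prints "fired"
			}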
	
		strhash should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/aristanetworks/goarista
		  - github.com/bytedance/sonic
		  - github.com/bytedance/go-tagexpr/v2
		  - github.com/cloudwego/dynamicgo
		  - github.com/v2fly/v2ray-core/v5
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		stringDataOnStack reports whether the string's data is
		stored on the current goroutine's stack.
	
		Testing adapters for hash quality tests (see hash_test.go)
		
		stringHash should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/k14s/starlark-go
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	 func stringStructOf(sp *string) *stringStruct
	 func stringtoslicebyte(buf *tmpBuf, s string) []byte
	 func stringtoslicerune(buf *[32]rune, s string) []rune
		subtract1 returns the byte pointer p-1.
		
		nosplit because it is used during write barriers and must not be preempted.
	
		subtractb returns the byte pointer p-n.
	
		suspendG suspends goroutine gp at a safe-point and returns the
		state of the suspended goroutine. The caller gets read access to
		the goroutine until it calls resumeG.
		
		It is safe for multiple callers to attempt to suspend the same
		goroutine at the same time. The goroutine may execute between
		subsequent successful suspend operations. The current
		implementation grants exclusive access to the goroutine, and hence
		multiple callers will serialize. However, the intent is to grant
		shared read access, so please don't depend on exclusive access.
		
		This must be called from the system stack and the user goroutine on
		the current M (if any) must be in a preemptible state. This
		prevents deadlocks where two goroutines attempt to suspend each
		other and both are in non-preemptible states. There are other ways
		to resolve this deadlock, but this seems simplest.
		
		TODO(austin): What if we instead required this to be called from a
		user goroutine? Then we could deschedule the goroutine while
		waiting instead of blocking the thread. If two goroutines tried to
		suspend each other, one of them would win and the other wouldn't
		complete the suspend until it was resumed. We would have to be
		careful that they couldn't actually queue up suspend for each other
		and then both be suspended. This would also avoid the need for a
		kernel context switch in the synchronous case because we could just
		directly schedule the waiter. The context switch is unavoidable in
		the signal case.
	
		sweepone sweeps some unswept heap span and returns the number of pages returned
		to the heap, or ^uintptr(0) if there was nothing to sweep.
	
		Switch to crashstack and call fn, with special handling of
		concurrent and recursive cases.
		
		Nosplit as it is called in a bad stack condition (we know
		morestack would fail).
	 func switchToCrashStack0(fn func())
	 func sync_atomic_CompareAndSwapUintptr(ptr *uintptr, old, new uintptr) bool
	 func sync_atomic_StoreUintptr(ptr *uintptr, new uintptr)
	 func sync_atomic_SwapUintptr(ptr *uintptr, new uintptr) uintptr
	 func sync_fatal(s string)
		Active spinning for sync.Mutex.
		
		sync_runtime_canSpin should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/livekit/protocol
		  - github.com/sagernet/gvisor
		  - gvisor.dev/gvisor
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		sync_runtime_doSpin should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/livekit/protocol
		  - github.com/sagernet/gvisor
		  - gvisor.dev/gvisor
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		sync_runtime_registerPoolCleanup should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/gopkg
		  - github.com/songzhibin97/gkit
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		sync_runtime_Semacquire should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - gvisor.dev/gvisor
		  - github.com/sagernet/gvisor
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	 func sync_runtime_SemacquireRWMutex(addr *uint32, lifo bool, skipframes int)
	 func sync_runtime_SemacquireRWMutexR(addr *uint32, lifo bool, skipframes int)
		sync_runtime_Semrelease should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - gvisor.dev/gvisor
		  - github.com/sagernet/gvisor
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	 func sync_throw(s string)	
		syncadjustsudogs adjusts gp's sudogs and copies the part of gp's
		stack they refer to while synchronizing with concurrent channel
		operations. It returns the number of bytes of stack copied.
	 func synctest_inBubble(sg any, f func())
	 func synctest_release(sg any)
	 func synctestRun(f func())
		sysAlloc transitions an OS-chosen region of memory from None to Ready.
		More specifically, it obtains a large chunk of zeroed memory from the
		operating system, typically on the order of a hundred kilobytes
		or a megabyte. This memory is always immediately available for use.
		
		sysStat must be non-nil.
		
		Don't split the stack as this function may be invoked without a valid G,
		which prevents us from allocating more stack.
	
		Don't split the stack as this method may be invoked without a valid G, which
		prevents us from allocating more stack.
	
		wrapper for syscall package to call cgocall for libc (cgo) calls.
	 func syscall_Exit(code int)	
		Called from syscall package after Exec.
	
		Called from syscall package after fork in parent.
		
		syscall_runtime_AfterFork is for package syscall,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - gvisor.dev/gvisor
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		Called from syscall package after fork in child.
		It resets non-sigignored signals to the default handler, and
		restores the signal mask in preparation for the exec.
		
		Because this might be called during a vfork, and therefore may be
		temporarily sharing address space with the parent process, this must
		not change any global variables or call into C code that may do so.
		
		syscall_runtime_AfterForkInChild is for package syscall,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - gvisor.dev/gvisor
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		Called from syscall package before Exec.
	
		Called from syscall package before fork.
		
		syscall_runtime_BeforeFork is for package syscall,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - gvisor.dev/gvisor
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		syscall_runtime_doAllThreadsSyscall executes a specified system call on
		all Ms.
		
		The system call is expected to succeed and return the same value on every
		thread. If any threads do not match, the runtime throws.
	 func syscall_runtimeSetenv(key, value string)	
		sysFault transitions a memory region from Ready to Reserved. It
		marks a region such that it will always fault if accessed. Used only for
		debugging the runtime.
		
		TODO(mknyszek): Currently it's true that all uses of sysFault transition
		memory from Ready to Reserved, but this may not be true in the future
		since on every platform the operation is much more general than that.
		If a transition from Prepared is ever introduced, create a new function
		that elides the Ready state accounting.
	 func sysFaultOS(v unsafe.Pointer, n uintptr)	
		sysFree transitions a memory region from any state to None. Therefore, it
		returns memory unconditionally. It is used if an out-of-memory error has been
		detected midway through an allocation or to carve out an aligned section of
		the address space. It is okay if sysFree is a no-op only if sysReserve always
		returns a memory region aligned to the heap allocator's alignment
		restrictions.
		
		sysStat must be non-nil.
		
		Don't split the stack as this function may be invoked without a valid G,
		which prevents us from allocating more stack.
	
		Don't split the stack as this function may be invoked without a valid G,
		which prevents us from allocating more stack.
	
		sysHugePage does not transition memory regions, but instead provides a
		hint to the OS that it would be more efficient to back this memory region
		with pages of a larger size transparently.
	
		sysHugePageCollapse attempts to immediately back the provided memory region
		with huge pages. It is best-effort and may fail silently.
	 func sysHugePageOS(v unsafe.Pointer, n uintptr)	
		sysMap transitions a memory region from Reserved to Prepared. It ensures the
		memory region can be efficiently transitioned to Ready.
		
		sysStat must be non-nil.
	
		sysMmap calls the mmap system call. It is implemented in assembly.
	
		Always runs without a P, so write barriers are not allowed.
	
		sysMunmap calls the munmap system call. It is implemented in assembly.
	
		sysNoHugePage does not transition memory regions, but instead provides a
		hint to the OS that it would be less efficient to back this memory region
		with pages of a larger size transparently.
	 func sysNoHugePageOS(v unsafe.Pointer, n uintptr)	
		sysReserve transitions a memory region from None to Reserved. It reserves
		address space in such a way that it would cause a fatal fault upon access
		(either via permissions or not committing the memory). Such a reservation is
		thus never backed by physical memory.
		
		If the pointer passed to it is non-nil, the caller wants the
		reservation there, but sysReserve can still choose another
		location if that one is unavailable.
		
		NOTE: sysReserve returns OS-aligned memory, but the heap allocator
		may use larger alignment, so the caller must be careful to realign the
		memory obtained by sysReserve.
	
		sysReserveAligned is like sysReserve, but the returned pointer is
		aligned to align bytes. It may reserve either n or n+align bytes,
		so it returns the size that was reserved.
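		
		The over-reserve-and-realign approach described here is a common
		pattern: reserve n+align bytes, then round the returned address up to
		the next multiple of align. A minimal sketch of the arithmetic
		(alignUp is a hypothetical helper, not the runtime's code; align must
		be a power of two):
		
			package main
			
			import "fmt"
			
			// alignUp rounds addr up to the next multiple of align.
			// align must be a power of two.
			func alignUp(addr, align uintptr) uintptr {
				return (addr + align - 1) &^ (align - 1)
			}
			
			func main() {
				const align = 0x10000
				base := uintptr(0x7f0000012345) // pretend the OS placed a reservation here
				// Any reservation of n+align bytes starting at base contains
				// an aligned run of n bytes starting at alignUp(base, align).
				fmt.Printf("%#x\n", alignUp(base, align)) // 0x7f0000020000
			}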
	
		sysSigaction calls the rt_sigaction system call.
	
		systemstack runs fn on a system stack.
		If systemstack is called from the per-OS-thread (g0) stack, or
		if systemstack is called from the signal handling (gsignal) stack,
		systemstack calls fn directly and returns.
		Otherwise, systemstack is being called from the limited stack
		of an ordinary goroutine. In this case, systemstack switches
		to the per-OS-thread stack, calls fn, and switches back.
		It is common to use a func literal as the argument, in order
		to share inputs and outputs with the code around the call
		to system stack:
		
			... set up y ...
			systemstack(func() {
				x = bigcall(y)
			})
			... use x ...
	
		sysUnused transitions a memory region from Ready to Prepared. It notifies the
		operating system that the physical pages backing this memory region are no
		longer needed and can be reused for other purposes. The contents of a
		sysUnused memory region are considered forfeit and the region must not be
		accessed again until sysUsed is called.
	 func sysUnusedOS(v unsafe.Pointer, n uintptr)	
		sysUsed transitions a memory region from Prepared to Ready. It notifies the
		operating system that the memory region is needed and ensures that the region
		may be safely accessed. This is typically a no-op on systems that don't have
		an explicit commit step and hard over-commit limits, but is critical on
		Windows, for example.
		
		This operation is idempotent for memory already in the Prepared state, so
		it is safe to refer, with v and n, to a range of memory that includes both
		Prepared and Ready memory. However, the caller must provide the exact amount
		of Prepared memory for accounting purposes.
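		
		The None/Reserved/Prepared/Ready vocabulary used by the sys* functions
		maps loosely onto ordinary mmap-family calls. The sketch below is a
		Linux-only analogy using the syscall package; it only illustrates the
		state machine and is not how the runtime itself performs these
		transitions (the runtime uses its own wrappers such as sysMmap):
		
			//go:build linux
			
			package main
			
			import "syscall"
			
			func main() {
				const n = 1 << 20 // 1 MiB
			
				// None -> Reserved: address space that faults on access.
				mem, err := syscall.Mmap(-1, 0, n,
					syscall.PROT_NONE,
					syscall.MAP_ANON|syscall.MAP_PRIVATE|syscall.MAP_NORESERVE)
				if err != nil {
					panic(err)
				}
			
				// Reserved -> Prepared/Ready: make the region accessible.
				if err := syscall.Mprotect(mem, syscall.PROT_READ|syscall.PROT_WRITE); err != nil {
					panic(err)
				}
				mem[0] = 1 // backed by physical memory on first touch
			
				// Ready -> Prepared: contents are forfeit; the OS may reclaim the pages.
				if err := syscall.Madvise(mem, syscall.MADV_DONTNEED); err != nil {
					panic(err)
				}
			
				// Any state -> None: return the address space entirely.
				if err := syscall.Munmap(mem); err != nil {
					panic(err)
				}
			}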
	
		taggedPointerPack creates a taggedPointer from a pointer and a tag.
		Tag bits that don't fit in the result are discarded.
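		
		Pointer tagging of this kind relies on 64-bit platforms not using all
		address bits. A minimal sketch of one possible packing, using plain
		integers rather than real pointers; the 48/16 split is illustrative
		and not necessarily the runtime's layout:
		
			package main
			
			import "fmt"
			
			// taggedValue packs a 48-bit address and a 16-bit tag into one word.
			// Tag bits that don't fit are discarded, as with taggedPointerPack.
			type taggedValue uint64
			
			func pack(addr uint64, tag uint16) taggedValue {
				return taggedValue(addr&(1<<48-1) | uint64(tag)<<48)
			}
			
			func (t taggedValue) addr() uint64 { return uint64(t) & (1<<48 - 1) }
			func (t taggedValue) tag() uint16  { return uint16(uint64(t) >> 48) }
			
			func main() {
				t := pack(0x00007f1234567890, 0xbeef)
				fmt.Printf("%#x %#x\n", t.addr(), t.tag()) // 0x7f1234567890 0xbeef
			}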
	
		templateThread is a thread in a known-good state that exists solely
		to start new threads in known-good states when the calling thread
		may not be in a good state.
		
		Many programs never need this, so templateThread is started lazily
		when we first enter a state that might lead to running on a thread
		in an unknown state.
		
		templateThread runs on an M without a P, so it must not have write
		barriers.
	
		threadCreateProfileInternal returns the number of records n in the profile.
		If there are fewer than size records, copyFn is invoked for each record, and
		ok returns true.
	
		throw triggers a fatal error that dumps a stack trace and exits.
		
		throw should be used for runtime-internal fatal errors where Go itself,
		rather than user code, may be at fault for the failure.
		
		throw should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/sonic
		  - github.com/cockroachdb/pebble
		  - github.com/dgraph-io/ristretto
		  - github.com/outcaste-io/ristretto
		  - github.com/pingcap/br
		  - gvisor.dev/gvisor
		  - github.com/sagernet/gvisor
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		ticksPerSecond returns a conversion rate between the cputicks clock and the nanotime clock.
		
		Note: Clocks are hard. Using this as an actual conversion rate for timestamps is ill-advised
		and should be avoided when possible. Use only for durations, where a tiny error term isn't going
		to make a meaningful difference in even a 1ms duration. If an accurate timestamp is needed,
		use nanotime instead. (The entire Windows platform is a broad exception to this rule, where nanotime
		produces timestamps on such a coarse granularity that the error from this conversion is actually
		preferable.)
		
		The strategy for computing the conversion rate is to write down nanotime and cputicks as
		early in process startup as possible. From then, we just need to wait until we get values
		from nanotime that we can use (some platforms have a really coarse system time granularity).
		We require some amount of time to pass to ensure that the conversion rate is fairly accurate
		in aggregate. But because we compute this rate lazily, there's a pretty good chance a decent
		amount of time has passed by the time we get here.
		
		Must be called from a normal goroutine context (running regular goroutine with a P).
		
		Called by runtime/pprof in addition to runtime code.
		
		TODO(mknyszek): This doesn't account for things like CPU frequency scaling. Consider
		a more sophisticated and general approach in the future.
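		
		The underlying arithmetic is a two-sample rate estimate: record a
		(cputicks, nanotime) pair early, record another later, and divide.
		A sketch of just that calculation, with placeholder samples standing
		in for the runtime's internal clocks:
		
			package main
			
			import "fmt"
			
			// ticksPerSecond estimates a conversion rate from two (ticks, nanos)
			// samples taken some time apart.
			func ticksPerSecond(ticks0, nanos0, ticks1, nanos1 int64) int64 {
				return (ticks1 - ticks0) * 1_000_000_000 / (nanos1 - nanos0)
			}
			
			func main() {
				// 3e9 ticks over 1.5e9 ns of elapsed time => a 2 GHz tick rate.
				fmt.Println(ticksPerSecond(100, 50, 100+3_000_000_000, 50+1_500_000_000))
			}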
	 func time_runtimeNow() (sec int64, nsec int32, mono int64)	
		Poor man's 64-bit division.
		This is a very special function, do not use it if you are not sure what you are doing.
		int64 division is lowered into _divv() call on 386, which does not fit into nosplit functions.
		Handles overflow in a time-specific manner.
		This keeps us within no-split stack limits on 32-bit processors.
	
		timeHistogramMetricsBuckets generates a slice of boundaries for
		the timeHistogram. These boundaries are represented in seconds,
		not nanoseconds like the timeHistogram represents durations.
	 func timer_delete(timerid int32) int32
	 func timer_settime(timerid int32, flags int32, new, old *itimerspec) int32
		timerchandrain removes all elements in channel c's buffer.
		It reports whether any elements were removed.
		Because it is only intended for timers, it does not
		handle waiting senders at all (all timer channels
		use non-blocking sends to fill the buffer).
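		
		Draining a buffered channel with non-blocking receives is an ordinary
		Go pattern. A small user-level sketch of the same idea (this is not
		the runtime-internal timerchandrain):
		
			package main
			
			import "fmt"
			
			// drain removes all buffered elements from c without blocking and
			// reports whether anything was removed. Like timerchandrain, it
			// ignores blocked senders; it assumes senders use non-blocking sends.
			func drain[T any](c chan T) bool {
				removed := false
				for {
					select {
					case <-c:
						removed = true
					default:
						return removed
					}
				}
			}
			
			func main() {
				c := make(chan int, 3)
				c <- 1
				c <- 2
				fmt.Println(drain(c), len(c)) // true 0
			}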
	
		timeSleep puts the current goroutine to sleep for at least ns nanoseconds.
	
		timeSleepUntil returns the time when the next timer should fire. Returns
		maxWhen if there are no timers.
		This is only called by sysmon and checkdead.
	
		trace_userLog emits a UserRegionBegin or UserRegionEnd event.
	
		trace_userRegion emits a UserRegionBegin or UserRegionEnd event,
		depending on mode (0 == Begin, 1 == End).
		
		TODO(mknyszek): Just make this two functions.
	
		trace_userTaskCreate emits a UserTaskCreate event.
	
		trace_userTaskEnd emits a UserTaskEnd event.
	
		traceAcquire prepares this M for writing one or more trace events.
		
		nosplit because it's called on the syscall path when stack movement is forbidden.
	
		traceAcquireEnabled is the traceEnabled path for traceAcquire. It's explicitly
		broken out to make traceAcquire inlineable to keep the overhead of the tracer
		when it's disabled low.
		
		nosplit because it's called by traceAcquire, which is nosplit.
	
		traceAdvance moves tracing to the next generation, and cleans up the current generation,
		ensuring that it's flushed out before returning. If stopTrace is true, it disables tracing
		altogether instead of advancing to the next generation.
		
		traceAdvanceSema must not be held.
		
		traceAdvance is called by golang.org/x/exp/trace using linkname.
	
		traceAllocFreeEnabled returns true if the trace is currently enabled
		and alloc/free events are also enabled.
	 func traceback1(pc, sp, lr uintptr, gp *g, flags unwindFlags)	
		traceback2 prints a stack trace starting at u. It skips the first "skip"
		logical frames, after which it prints at most "max" logical frames. It
		returns n, which is the number of logical frames skipped and printed, and
		lastN, which is the number of logical frames skipped or printed just in the
		physical frame that u references.
	
		tracebackHexdump hexdumps part of stk around frame.sp and frame.fp
		for debugging purposes. If the address bad is included in the
		hexdumped range, it will mark it as well.
	 func tracebackothers(me *g)	
		tracebackPCs populates pcBuf with the return addresses for each frame from u
		and returns the number of PCs written to pcBuf. The returned PCs correspond
		to "logical frames" rather than "physical frames"; that is if A is inlined
		into B, this will still return PCs for both A and B. This also includes PCs
		generated by the cgo unwinder, if one is registered.
		
		If skip != 0, this skips this many logical frames.
		
		Callers should set the unwindSilentErrors flag on u.
	
		tracebacktrap is like traceback but expects that the PC and SP were obtained
		from a trap, not from gp->sched or gp->syscallpc/gp->syscallsp or GetCallerPC/GetCallerSP.
		Because they are from a trap instead of from a saved pair,
		the initial PC must not be rewound to the previous instruction.
		(All the saved pairs record a PC that is a return address, so we
		rewind it into the CALL instruction.)
		If gp.m.libcall{g,pc,sp} information is available, it uses that information in preference to
		the pc/sp/lr passed in.
	
		traceBufFlush flushes a trace buffer.
		
		Must run on the system stack because trace.lock must be held.
	
		traceClockNow returns a monotonic timestamp. The clock this function gets
		the timestamp from is specific to tracing, and shouldn't be mixed with other
		clock sources.
		
		nosplit because it's called from exitsyscall and various trace writing functions,
		which are nosplit.
		
		traceClockNow is called by golang.org/x/exp/trace using linkname.
	
		traceClockUnitsPerSecond estimates the number of trace clock units per
		second that elapse.
	
		traceCompressStackSize assumes size is a power of 2 and returns log2(size).
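		
		For a power of two, log2 is simply the number of trailing zero bits.
		A stand-in sketch of that computation (not the internal helper itself):
		
			package main
			
			import (
				"fmt"
				"math/bits"
			)
			
			// log2PowerOfTwo returns log2(size), assuming size is a power of two.
			func log2PowerOfTwo(size uint64) int {
				return bits.TrailingZeros64(size)
			}
			
			func main() {
				fmt.Println(log2PowerOfTwo(1), log2PowerOfTwo(8), log2PowerOfTwo(4096)) // 0 3 12
			}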
	
		traceCPUFlush flushes trace.cpuBuf[gen%2]. The caller must be certain that gen
		has completed and that there are no more writers to it.
	
		traceCPUSample writes a CPU profile sample stack to the execution tracer's
		profiling buffer. It is called from a signal handler, so is limited in what
		it can do. mp must be the thread that is currently stopped in a signal.
	
		traceEnabled returns true if the trace is currently enabled.
	
		traceExitedSyscall marks a goroutine as having exited the syscall slow path.
	
		traceExitingSyscall marks a goroutine as exiting the syscall slow path.
		
		Must be paired with a traceExitedSyscall call.
	
		tracefpunwindoff returns true if frame pointer unwinding for the tracer is
		disabled via GODEBUG or not supported by the architecture.
	
		traceFrequency writes a batch with a single EvFrequency event.
		
		freq is the number of trace clock units per second.
	
		traceGoroutineStackID creates a trace ID for the goroutine stack from its base address.
	
		traceHeapObjectID creates a trace ID for a heap object at address addr.
	
		traceInitReadCPU initializes CPU profile -> tracer state for tracing.
		
		Returns a profBuf for reading from.
	
		traceLockInit initializes global trace locks.
	 func traceNextGen(gen uintptr) uintptr	
		traceReadCPU attempts to read from the provided profBuf[gen%2] and write
		into the trace. Returns true if there might be more to read or false
		if the profBuf is closed or the caller should otherwise stop reading.
		
		The caller is responsible for ensuring that gen does not change. Either
		the caller must be in a traceAcquire/traceRelease block, or must be calling
		with traceAdvanceSema held.
		
		No more than one goroutine may be in traceReadCPU for the same
		profBuf at a time.
		
		Must not run on the system stack because profBuf.read performs race
		operations.
	
		traceReader returns the trace reader that should be woken up, if any.
		Callers should first check (traceEnabled() || traceShuttingDown()).
		
		This must run on the system stack because it acquires trace.lock.
	
		traceReaderAvailable returns the trace reader if it is not currently
		scheduled and should be. Callers should first check that
		(traceEnabled() || traceShuttingDown()) is true.
	
		traceRegisterLabelsAndReasons re-registers mark worker labels and
		goroutine stop/block reasons in the string table for the provided
		generation. Note: the provided generation must not have started yet.
	
		traceRelease indicates that this M is done writing trace events.
		
		nosplit because it's called on the syscall path when stack movement is forbidden.
	
		traceShuttingDown returns true if the trace is currently shutting down.
	
		traceSnapshotMemory takes a snapshot of all runtime memory that there are events for
		(heap spans, heap objects, goroutine stacks, etc.) and writes out events for them.
		
		The world must be stopped and tracing must be enabled when this function is called.
	
		traceSpanID creates a trace ID for the span s for the trace.
	
		traceStack captures a stack trace from a goroutine and registers it in the trace
		stack table. It then returns its unique ID. If gp == nil, then traceStack will
		attempt to use the current execution context.
		
		skip controls the number of leaf frames to omit in order to hide tracer internals
		from stack traces, see CL 5523.
		
		Avoid calling this function directly. gen needs to be the current generation
		that this stack trace is being written out for, which needs to be synchronized with
		generations moving forward. Prefer traceEventWriter.stack.
	
		traceStartReadCPU creates a goroutine to start reading CPU profile
		data into an active trace.
		
		traceAdvanceSema must be held.
	
		traceStopReadCPU blocks until the trace CPU reading goroutine exits.
		
		traceAdvanceSema must be held, and tracing must be disabled.
	
		traceThreadDestroy is called when a thread is removed from
		sched.freem.
		
		mp must not be able to emit trace events anymore.
		
		sched.lock must be held to synchronize with traceAdvance.
	
		trygetfull tries to get a full or partially empty workbuffer.
		If one is not immediately available return nil.
	
		tryRecordGoroutineProfile ensures that gp1 has the appropriate representation
		in the current goroutine profile: either that it should not be profiled, or
		that a snapshot of its call stack and labels is now in the profile.
	
		tryRecordGoroutineProfileWB asserts that write barriers are allowed and calls
		tryRecordGoroutineProfile.
	
		typeAssert builds an itab for the concrete type t and the
		interface type s.Inter. If the conversion is not possible it
		panics if s.CanFail is false and returns nil if s.CanFail is true.
	
		typeBitsBulkBarrier executes a write barrier for every
		pointer that would be copied from [src, src+size) to [dst,
		dst+size) by a memmove using the type bitmap to locate those
		pointer slots.
		
		The type typ must correspond exactly to [src, src+size) and [dst, dst+size).
		dst, src, and size must be pointer-aligned.
		
		Must not be preempted because it typically runs right before memmove,
		and the GC must observe them as an atomic action.
		
		Callers must perform cgo checks if goexperiment.CgoCheck2.
	
		typedmemclr clears the typed memory at ptr with type typ. The
		memory at ptr must already be initialized (and hence in type-safe
		state). If the memory is being initialized for the first time, see
		memclrNoHeapPointers.
		
		If the caller knows that typ has pointers, it can alternatively
		call memclrHasPointers.
		
		TODO: A "go:nosplitrec" annotation would be perfect for this.
	
		typedmemmove copies a value of type typ to dst from src.
		Must be nosplit, see #16026.
		
		TODO: Perfect for go:nosplitrec since we can't have a safe point
		anywhere in the bulk barrier or memmove.
		
		typedmemmove should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/RomiChan/protobuf
		  - github.com/segmentio/encoding
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		typedslicecopy should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/segmentio/encoding
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		typehash computes the hash of the object of type t at address p.
		h is the seed.
		This function is seldom used. Most maps use for hashing either
		fixed functions (e.g. f32hash) or compiler-generated functions
		(e.g. for a type like struct { x, y string }). This implementation
		is slower but more general and is used for hashing interface types
		(called from interhash or nilinterhash, above) or for hashing in
		maps generated by reflect.MapOf (reflect_typehash, below).
		Note: this function must match the compiler generated
		functions exactly. See issue 37716.
		
		typehash should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/puzpuzpuz/xsync/v2
		  - github.com/puzpuzpuz/xsync/v3
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		typelinksinit scans the types from extra modules and builds the
		moduledata typemap used to de-duplicate type pointers.
	
		typesEqual reports whether two types are equal.
		
		Everywhere in the runtime and reflect packages, it is assumed that
		there is exactly one *_type per Go type, so that pointer equality
		can be used to test if types are equal. There is one place that
		breaks this assumption: buildmode=shared. In this case a type can
		appear as two different pieces of memory. This is hidden from the
		runtime and reflect package by the per-module typemap built in
		typelinksinit. It uses typesEqual to map types from later modules
		back into earlier ones.
		
		Only typelinksinit needs this function.
	
		unblocksig removes sig from the current thread's signal mask.
		This is nosplit and nowritebarrierrec because it is called from
		dieFromSignal, which can be called by sigfwdgo while running in the
		signal handler, on the signal stack, with no g available.
	
		unblockTimerChan is called when a channel op that was blocked on c
		is no longer blocked. Every call to blockTimerChan must be paired with
		a call to unblockTimerChan.
		The caller holds the channel lock for c and possibly other channels.
		unblockTimerChan removes c from the timer heap when nothing is
		blocked on it anymore.
	 func unique_runtime_registerUniqueMapCleanup(f func())	
		We might not be holding a p in this code.
	
		unlock2Wake updates the list of Ms waiting on l, waking an M if necessary.
	 func unlockextra(mp *m, delta int32)
	 func unlockWithRank(l *mutex)
		Called from dropm to undo the effect of an minit.
	
		unminitSignals is called from dropm, via unminit, to undo the
		effect of calling minit on a non-Go thread.
	
		unpackScavChunkData unpacks a scavChunkData from a uint64.
	
		The linker redirects a reference of a method that it determined
		unreachable to a reference to this function, so it will throw if
		ever called.
	
		Keep this code in sync with cmd/compile/internal/walk/builtin.go:walkUnsafeSlice
	
		Keep this code in sync with cmd/compile/internal/walk/builtin.go:walkUnsafeSlice
	 func unsafestring(ptr unsafe.Pointer, len int)	
		Keep this code in sync with cmd/compile/internal/walk/builtin.go:walkUnsafeString
	 func unsafestringcheckptr(ptr unsafe.Pointer, len64 int64)	
		unsafeTraceExpWriter produces a traceWriter for experimental trace batches
		that doesn't lock the trace. Data written to experimental batches need not
		conform to the standard trace format.
		
		It should only be used in contexts where either:
		- Another traceLocker is held.
		- trace.gen is prevented from advancing.
		
		This does not have the same stack growth restrictions as traceLocker.writer.
		
		buf may be nil.
	
		unsafeTraceWriter produces a traceWriter that doesn't lock the trace.
		
		It should only be used in contexts where either:
		- Another traceLocker is held.
		- trace.gen is prevented from advancing.
		
		This does not have the same stack growth restrictions as traceLocker.writer.
		
		buf may be nil.
	
		Update the C environment if cgo is loaded.
	
		userArenaChunkReserveBytes returns the amount of additional bytes to reserve for
		heap metadata.
	
		userArenaHeapBitsSetSliceType is the equivalent of heapBitsSetType but for
		Go slice backing store values allocated in a user arena chunk. It sets up the
		heap bitmap for n consecutive values with type typ allocated at address ptr.
	
		userArenaHeapBitsSetType is the equivalent of heapSetType but for
		non-slice-backing-store Go values allocated in a user arena chunk. It
		sets up the type metadata for the value with type typ allocated at address ptr.
		base is the base address of the arena chunk.
	
		usesLibcall indicates whether this runtime performs system calls
		via libcall.
	 func usleep_no_g(usec uint32)	
		validSIGPROF compares this signal delivery's code against the signal sources
		that the profiler uses, returning whether the delivery should be processed.
		To be processed, a signal delivery from a known profiling mechanism should
		correspond to the best profiling mechanism available to this thread. Signals
		from other sources are always considered valid.
	
		values for implementing maps.values
	 func vdsoFindVersion(info *vdsoInfo, ver *vdsoVersionKey) int32
	 func vdsoInitFromSysinfoEhdr(info *vdsoInfo, hdr *elfEhdr)
	 func vdsoParseSymbols(info *vdsoInfo, version int32)
		This is exported for use in internal/syscall/unix as well as x/sys/unix.
	
		Free vgetrandom state from the M (if any) prior to destroying the M.
		
		This may allocate, so it must have a P.
	
		wakeNetPoller wakes up the thread sleeping in the network poller if it isn't
		going to wake up before the when argument; or it wakes an idle P to service
		timers and the network poller if there isn't one already.
	
		Tries to add one more P to execute G's.
		Called when a G is made runnable (newproc, ready).
		Must be called with a P.
		
		wakep should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - gvisor.dev/gvisor
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		wantAsyncPreempt returns whether an asynchronous preemption is
		queued for gp.
	
		wbBufFlush flushes the current P's write barrier buffer to the GC
		workbufs.
		
		This must not have write barriers because it is part of the write
		barrier implementation.
		
		This and everything it calls must be nosplit because 1) the stack
		contains untyped slots from gcWriteBarrier and 2) there must not be
		a GC safe point between the write barrier test in the caller and
		flushing the buffer.
		
		TODO: A "go:nosplitrec" annotation would be perfect for this.
	
		wbBufFlush1 flushes p's write barrier buffer to the GC work queue.
		
		This must not have write barriers because it is part of the write
		barrier implementation, so this may lead to infinite loops or
		buffer corruption.
		
		This must be non-preemptible because it uses the P's workbuf.
	
		wbMove performs the write barrier operations necessary before
		copying a region of memory from src to dst of type typ.
		Does not actually do the copying.
	
		wbZero performs the write barrier operations necessary before
		zeroing a region of memory at address dst of type typ.
		Does not actually do the zeroing.
	
		wirep is the first step of acquirep, which actually associates the
		current M to pp. This is broken out so we can disallow write
		barriers for this part, since we don't yet have a P.
	
		write must be nosplit on Windows (see write1)
	
		write1 calls the write system call.
		It returns a non-negative number of bytes written or a negative errno value.
	
		writeErrData is the common parts of writeErr{,Str}.
	
		writeErrStr writes a string to descriptor 2.
		If SetCrashOutput(f) was called, it also writes to f.
	 func writeheapdump_m(fd uintptr, m *MemStats)
Package-Level Variables (total 317, in which 1 is exported)
	
		MemProfileRate controls the fraction of memory allocations
		that are recorded and reported in the memory profile.
		The profiler aims to sample an average of
		one allocation per MemProfileRate bytes allocated.
		
		To include every allocated block in the profile, set MemProfileRate to 1.
		To turn off profiling entirely, set MemProfileRate to 0.
		
		The tools that process the memory profiles assume that the
		profile rate is constant across the lifetime of the program
		and equal to the current value. Programs that change the
		memory profiling rate should do so just once, as early as
		possible in the execution of the program (for example,
		at the beginning of main).
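		
		For example, a program that wants every allocation in its heap profile
		might set the rate once at startup and write the profile on exit.
		A minimal sketch using runtime/pprof (the output filename is arbitrary):
		
			package main
			
			import (
				"os"
				"runtime"
				"runtime/pprof"
			)
			
			func main() {
				// Set the sampling rate once, as early as possible.
				runtime.MemProfileRate = 1 // record every allocation (expensive)
			
				// ... the rest of the program runs and allocates ...
			
				f, err := os.Create("mem.pprof")
				if err != nil {
					panic(err)
				}
				defer f.Close()
				if err := pprof.WriteHeapProfile(f); err != nil {
					panic(err)
				}
			}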
		
		_cgo_mmap is filled in by runtime/cgo when it is linked into the
		program, so it is only non-nil when using cgo.
	
		_cgo_munmap is filled in by runtime/cgo when it is linked into the
		program, so it is only non-nil when using cgo.
	
		_cgo_setenv should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/ebitengine/purego
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		_cgo_sigaction is filled in by runtime/cgo when it is linked into the
		program, so it is only non-nil when using cgo.
	
		_cgo_unsetenv should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/ebitengine/purego
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	  var addrspace_vec [1]byte	
		used in asm_{386,amd64,arm64}.s to seed the hash function
	
		agg is used by readMetrics, and is protected by metricsSema.
		
		Managed as a global variable because its pointer will be
		an argument to a dynamically-defined function, and we'd
		like to avoid it escaping to the heap.
	
		aixStaticDataBase (used only on AIX) holds the unrelocated address
		of the data section, set by the linker.
		
		On AIX, an R_ADDR relocation from an RODATA symbol to a DATA symbol
		does not work, as the dynamic loader can change the address of the
		data section, and it is not possible to apply a dynamic relocation
		to RODATA. In order to get the correct address, we need to apply
		the delta between unrelocated and relocated data section addresses.
		aixStaticDataBase is the unrelocated address, and moduledata.data is
		the relocated one.
	
		allDloggers is a list of all dloggers, linked through
		dlogger.allLink. This is accessed atomically. This is prepend only,
		so it doesn't need to protect against ABA races.
	
		allglen and allgptr are atomic variables that contain len(allgs) and
		&allgs[0] respectively. Proper ordering depends on totally-ordered
		loads and stores. Writes are protected by allglock.
		
		allgptr is updated before allglen. Readers should read allglen
		before allgptr to ensure that allglen is always <= len(allgptr). New
		Gs appended during the race can be missed. For a consistent view of
		all Gs, allglock must be held.
		
		allgptr copies should always be stored as a concrete type or
		unsafe.Pointer, not uintptr, to ensure that GC can still reach it
		even if it points to a stale array.
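		
		This is the usual ordering for publishing a growable snapshot: writers
		store the pointer first and the length second, readers load in the
		opposite order, so a reader never observes a length larger than the
		array behind the pointer it reads. A user-level sketch of the same
		idea with sync/atomic (not the runtime's code), assuming the published
		slice only ever grows, as allgs does:
		
			package main
			
			import (
				"fmt"
				"sync/atomic"
			)
			
			var (
				allPtr atomic.Pointer[[]int]
				allLen atomic.Int64
			)
			
			// publish makes s visible to readers. s must contain everything
			// previously published as a prefix (the data only grows).
			func publish(s []int) {
				allPtr.Store(&s)            // pointer first...
				allLen.Store(int64(len(s))) // ...then length
			}
			
			// snapshot returns a consistent prefix of the published data.
			func snapshot() []int {
				n := allLen.Load() // length first...
				p := allPtr.Load() // ...then pointer (at least as new as n)
				if p == nil {
					return nil
				}
				return (*p)[:n]
			}
			
			func main() {
				publish([]int{1, 2, 3})
				fmt.Println(snapshot()) // [1 2 3]
			}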
	
		allgs contains all Gs ever created (including dead Gs), and thus
		never shrinks.
		
		Access via the slice is protected by allglock or stop-the-world.
		Readers that cannot take the lock may (carefully!) use the atomic
		variables below.
	
		allocmLock is locked for read when creating new Ms in allocm and their
		addition to allm. Thus acquiring this lock for write blocks the
		creation of new Ms.
	
		len(allp) == gomaxprocs; may change at safe points, otherwise
		immutable.
	
		allpLock protects P-less reads and size changes of allp, idlepMask,
		and timerpMask, and all writes to allp.
	
		asyncPreemptStack is the bytes of stack space required to inject an
		asyncPreempt call.
	
		auxv is populated on relevant platforms but defined here for all platforms
		so x/sys/cpu can assume the getAuxv symbol exists without keeping its list
		of auxv-using GOOS build tags in sync.
		
		It contains an even number of elements, (tag, value) pairs.
	  var auxvreadbuf [128]uintptr
	  var bbuckets atomic.UnsafePointer // *bucket, blocking profile buckets
	  var blockprofilerate uint64 // in CPU ticks
	  var boringCaches []unsafe.Pointer // for crypto/internal/boring
		boundsErrorFmts provide error text for various out-of-bounds panics.
		Note: if you change these strings, you should adjust the size of the buffer
		in boundsError.Error below as well.
	
		boundsNegErrorFmts are overriding formats if x is negative. In this case there's no need to report y.
	  var buckhash atomic.UnsafePointer // *buckhashArray	
		buildVersion is the Go tree's version string at build time.
		
		If any GOEXPERIMENTs are set to non-default values, it will include
		"X:<GOEXPERIMENT>".
		
		This is set by the linker.
		
		This is accessed by "go version <binary>".
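		
		User code reads this string through the exported [Version] function,
		for example:
		
			package main
			
			import (
				"fmt"
				"runtime"
			)
			
			func main() {
				// Prints something like "go1.23.1", possibly followed by
				// "X:<GOEXPERIMENT>" entries.
				fmt.Println(runtime.Version())
			}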
	
		casgstatusAlwaysTrack is a debug flag that causes casgstatus to always track
		various latencies on every transition instead of sampling them.
	
		cgoAlwaysFalse is a boolean value that is always false.
		The cgo-generated code says if cgoAlwaysFalse { cgoUse(p) },
		or if cgoAlwaysFalse { cgoKeepAlive(p) }.
		The compiler cannot see that cgoAlwaysFalse is always false,
		so it emits the test and keeps the call, giving the desired
		escape/alive analysis result. The test is cheaper than the call.
	
		cgoHasExtraM is set on startup when an extra M is created for cgo.
		The extra M must be created before any C/C++ code calls cgocallback.
	
		When running with cgo, we call _cgo_thread_start
		to start threads for us so that we can play nicely with
		foreign code.
	  var class_to_size [68]uint16	
		crashFD is an optional file descriptor to use for fatal panics, as
		set by debug.SetCrashOutput (see #42888). If it is a valid fd (not
		all ones), writeErr and related functions write to it in addition
		to standard error.
		
		Initialized to -1 in schedinit.
	
		crashing is the number of m's we have waited for when implementing
		GOTRACEBACK=crash when a signal is received.
	
		Holds variables parsed from GODEBUG env var,
		except for "memprofilerate" since there is an
		existing int var for that value, which may
		already have an initial value.
	
		debugPinnerKeepUnpin is used to make runtime.(*Pinner).Unpin reachable.
	  var debugPtrmask struct{lock mutex; data *byte}
	  var defaultGOROOT string // set by cmd/link
		disableMemoryProfiling is set by the linker if memory profiling
		is not used and the link type guarantees nobody else could use it
		elsewhere.
		We check if the runtime.memProfileInternal symbol is present.
	
		channels for synchronizing signal mask updates with the signal mask
		thread
	
		doubleCheckReadMemStats controls a double-check mode for ReadMemStats that
		ensures consistency between the values that ReadMemStats is using and the
		runtime-internal stats.
	
		Empty interface switch cache. Contains one entry with a nil Typ (which
		causes a cache lookup to fail immediately.)
	
		dummy mspan that contains no free objects.
	
		Empty type assert cache. Contains one entry with a nil Typ (which
		causes a cache lookup to fail immediately.)
	
		channels for synchronizing signal mask updates with the signal mask
		thread
	
		execLock serializes exec and clone to avoid bugs or unspecified
		behaviour around exec'ing while creating/destroying threads. See
		issue #19546.
	
		Locking linked list of extra M's, via mp.schedlink. Must be accessed
		only via lockextra/unlockextra.
		
		Can't be atomic.Pointer[m] because we use an invalid pointer as a
		"locked" sentinel value. M's on this list remain visible to the GC
		because their mp.curg is on allgs.
	
		Number of extra M's in use by threads.
	
		Number of M's in the extraM list.
	
		Number of waiters in lockextra.
	
		faketime is the simulated time in nanoseconds since 1970 for the
		playground.
		
		Zero means not to use faketime.
	  var fastlog2Table [33]float64
	  var finalizer1 [5]byte
		This runs during the GC sweep phase. Heap memory can't be allocated while sweep is running.
	
		This runs during the GC sweep phase. Heap memory can't be allocated while sweep is running.
	
		This runs during the GC sweep phase. Heap memory can't be allocated while sweep is running.
	
		This runs during the GC sweep phase. Heap memory can't be allocated while sweep is running.
	
		This runs during the GC sweep phase. Heap memory can't be allocated while sweep is running.
	  var firstmoduledata moduledata // linker symbol	
		forcegcperiod is the maximum time in nanoseconds between garbage
		collections. If we go this long without a garbage collection, one
		is forced to run.
		
		This is a variable for testing purposes. It normally doesn't change.
	
		Bit vector of free marks.
		Needs to be as big as the largest number of objects per span.
	
		freezing is set to non-zero if the runtime is trying to freeze the
		world.
	
		Stores the signal handlers registered before Go installed its own.
		These signal handlers will be invoked in cases where Go doesn't want to
		handle a particular signal (e.g., signal occurred on a non-Go thread).
		See sigfwdgo for more information on when the signals are forwarded.
		
		This is read by the signal handler; accesses should use
		atomic.Loaduintptr and atomic.Storeuintptr.
	
		Total number of gcBgMarkWorker goroutines. Protected by worldsema.
	
		Pool of GC parked background workers. Entries are type
		*gcBgMarkWorkerNode.
	  var gcBitsArenas struct{lock mutex; free *gcBitsArena; next *gcBitsArena; current *gcBitsArena; previous *gcBitsArena}	
		gcBlackenEnabled is 1 if mutator assists and background mark
		workers are allowed to blacken objects. This must only be set when
		gcphase == _GCmark.
	
		gcController implements the GC pacing controller that determines
		when to trigger concurrent garbage collection and how much marking
		work to do in mutator assists and background marking.
		
		It calculates the ratio between the allocation rate (in terms of CPU
		time) and the GC scan throughput to determine the heap size at which to
		trigger a GC cycle such that no GC assists are required to finish on time.
		This algorithm thus optimizes GC CPU utilization to the dedicated background
		mark utilization of 25% of GOMAXPROCS by minimizing GC assists.
		The high-level design of this algorithm is documented
		at https://github.com/golang/proposal/blob/master/design/44167-gc-pacer-redesign.md.
		See https://golang.org/s/go15gcpacing for additional historical context.
	
		gcCPULimiter is a mechanism to limit GC CPU utilization in situations
		where it might become excessive and inhibit application progress (e.g.
		a death spiral).
		
		The core of the limiter is a leaky bucket mechanism that fills with GC
		CPU time and drains with mutator time. Because the bucket fills and
		drains with time directly (i.e. without any weighting), this effectively
		sets a very conservative limit of 50%. This limit could be enforced directly,
		but the purpose of the bucket is to accommodate spikes in GC CPU
		utilization without hurting throughput.
		
		Note that the bucket in the leaky bucket mechanism can never go negative,
		so the GC never gets credit for a lot of CPU time spent without the GC
		running. This is intentional, as an application that stays idle for, say,
		an entire day, could build up enough credit to fail to prevent a death
		spiral the following day. The bucket's capacity is the GC's only leeway.
		
		The capacity thus also sets the window the limiter considers. For example,
		if the capacity of the bucket is 1 cpu-second, then the limiter will not
		kick in until at least 1 full cpu-second in the last 2 cpu-second window
		is spent on GC CPU time.
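		
		A leaky bucket of this shape is straightforward to sketch: GC CPU time
		fills the bucket, mutator CPU time drains it, the fill level saturates
		at the capacity and never drops below zero, and the limiter is engaged
		while the bucket is full. The following is a simplified illustration
		of the idea, not the runtime's gcCPULimiter:
		
			package main
			
			import "fmt"
			
			// bucket is a simplified leaky bucket: gcTime fills it, mutatorTime
			// drains it, and the fill level is clamped to [0, capacity].
			type bucket struct {
				fill, capacity int64 // nanoseconds of CPU time
			}
			
			// accumulate adds GC time, subtracts mutator time, and reports
			// whether the limiter is engaged (bucket full), i.e. GC CPU use
			// has reached the ~50% limit over the bucket's window.
			func (b *bucket) accumulate(gcTime, mutatorTime int64) bool {
				b.fill += gcTime - mutatorTime
				if b.fill < 0 {
					b.fill = 0 // no banked credit for long idle periods
				}
				if b.fill > b.capacity {
					b.fill = b.capacity
				}
				return b.fill == b.capacity
			}
			
			func main() {
				b := bucket{capacity: 1_000_000_000} // 1 cpu-second of leeway
				fmt.Println(b.accumulate(800_000_000, 100_000_000)) // false: 0.7s filled
				fmt.Println(b.accumulate(500_000_000, 100_000_000)) // true: saturated
			}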
	
		gcDebugMarkDone contains fields used to debug/test mark termination.
	
		gcMarkDoneFlushed counts the number of P's with flushed work.
		
		Ideally this would be a captured local in gcMarkDone, but forEachP
		escapes its callback closure, so it can't capture anything.
		
		This is protected by markDoneSema.
	
		gcMarkWorkerModeStrings are the strings labels of gcMarkWorkerModes
		to use in execution traces.
	
		Garbage collector phase.
		Indicates to write barrier and synchronization task to perform.
	
		gcrash is a fake g that can be used when crashing due to bad
		stack conditions.
	
		Holding gcsema grants the M the right to block a GC, and blocks
		until the current GC is done. In particular, it prevents gomaxprocs
		from changing concurrently.
		
		TODO(mknyszek): Once gomaxprocs and the execution tracer can handle
		being changed/enabled during a GC, remove this.
	  var globalAlloc struct{mutex; persistentAlloc}	
		globalRand holds the global random state.
		It is only used at startup and for creating new m's.
		Otherwise the per-m random state should be used
		by calling goodrand.
	
		set by cmd/link on arm systems
		accessed using linkname by internal/runtime/atomic.
		
		goarm should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/creativeprojects/go-selfupdate
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		set by cmd/link on arm systems
		accessed using linkname by internal/runtime/atomic.
		
		goarm should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/creativeprojects/go-selfupdate
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	  var godebugEnv atomic.Pointer[string] // set by parsedebugvars
	  var godebugNewIncNonDefault atomic.Pointer[func(string) func()]
	  var goroutineProfile struct{sema uint32; active bool; offset atomic.Int64; records []profilerecord.StackRecord; labels []Pointer}
	  var gStatusStrings [10]string
		handlingSig is indexed by signal number and is non-zero if we are
		currently handling the signal. Or, to put it another way, whether
		the signal handler is currently set to the Go signal handler or not.
		This is uint32 rather than bool so that we can use atomic instructions.
	
		used in hash{32,64}.go to seed the hash function
	
		Bitmask of Ps in _Pidle list, one bit per P. Reads and writes must
		be atomic. Length may change at safe points.
		
		Each P must update only its own bit. In order to maintain
		consistency, a P going idle must update the idle mask simultaneously with
		updates to the idle P list under the sched.lock, otherwise a racing
		pidleget may clear the mask before pidleput sets the mask,
		corrupting the bitmap.
		
		N.B., procresize takes ownership of all Ps in stopTheWorldWithSema.
	
		inForkedChild is true while manipulating signals in the child process.
		This is used to avoid calling libc functions in case we are using vfork.
	
		Value to use for signal mask for newly created M's.
	
		inittrace stores statistics for init functions which are
		updated by malloc and newproc when active is true.
	
		inProgress is a byte whose address is a sentinel indicating that
		some thread is currently building the GC bitmask for a type.
	
		intArgRegs is used by the various register assignment
		algorithm implementations in the runtime. These include:
		- Finalizers (mfinal.go)
		- Windows callbacks (syscall_windows.go)
		
		Both are stripped-down versions of the algorithm since they
		only have to deal with a subset of cases (finalizers only
		take a pointer or interface argument, Go Windows callbacks
		don't support floating point).
		
		It should be modified with care and is generally only
		modified when testing this package.
		
		It should never be set higher than its internal/abi
		constant counterparts, because the system relies on a
		structure that is at least large enough to hold the
		registers the system supports.
		
		Protected by finlock.
	
		Set by the linker so the runtime can determine the buildmode.
	
		iscgo is set to true by the runtime/cgo package
		
		iscgo should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/ebitengine/purego
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		isIdleInSynctest indicates that a goroutine is considered idle by synctest.Wait.
	
		Set by the linker so the runtime can determine the buildmode.
	
		isWaitingForSuspendG indicates that a goroutine is only entering _Gwaiting and
		setting a waitReason because it needs to be able to let the suspendG
		(used by the GC and the execution tracer) take ownership of its stack.
		The G is always actually executing on the system stack in these cases.
		
		TODO(mknyszek): Consider replacing this with a new dedicated G status.
	  var itabTable *itabTableType // pointer to current table
	  var itabTableInit itabTableType // starter table
		lastmoduledatap should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/sonic
		
		Do not remove or change the type signature.
		See go.dev/issues/67401.
		See go.dev/issues/71672.
	
		levelBits is the number of bits in the radix for a given level in the super summary
		structure.
		
		The sum of all the entries of levelBits should equal heapAddrBits.
	
		levelLogPages is log2 the maximum number of runtime pages in the address space
		a summary in the given level represents.
		
		The leaf level always represents exactly log2 of 1 chunk's worth of pages.
	
		levelShift is the number of bits to shift to acquire the radix for a given level
		in the super summary structure.
		
		With levelShift, one can compute the index of the summary at level l related to a
		pointer p by doing:
		
			p >> levelShift[l]
	
		lockNames gives the names associated with each of the above ranks.
	
		lockPartialOrder is the transitive closure of the lock rank graph.
		An entry for rank X lists all of the ranks that can already be held
		when rank X is acquired.
		
		Lock ranks that allow self-cycles list themselves.
	
		main_init_done is a signal used by cgocallbackg that initialization
		has been completed. It is made before _cgo_notify_runtime_init_done,
		so all cgo calls can rely on it existing. When main_init is complete,
		it is closed, meaning cgocallbackg can reliably receive from it.
	
		mainStarted indicates that the main M has started.
	
		channels for synchronizing signal mask updates with the signal mask
		thread
	
		maxOffAddr is the maximum address in the offset address
		space. It corresponds to the highest virtual address representable
		by the page alloc chunk and heap arena maps.
	  var maxstacksize uintptr // enough until runtime.main sets it for real
	  var mbuckets atomic.UnsafePointer // *bucket, memory profile buckets
	  var methodValueCallFrameObjs [1]stackObjectRecord // initialized in stackobjectinit
	  var metrics map[string]metricData
		metrics is a map of runtime/metrics keys to data used by the runtime
		to sample each metric's value. metricsInit indicates it has been
		initialized.
		
		These fields are protected by metricsSema which should be
		locked/unlocked with metricsLock() / metricsUnlock().
	  var minhexdigits int // protected by printlock	
		minOffAddr is the minimum address in the offset space, and
		it corresponds to the virtual address arenaBaseOffset.
	
		set using cmd/go/internal/modload.ModInfoProg
	  var modulesSlice *[]*moduledata // see activeModules	
		mSpanStateNames are the names of the span states, indexed by
		mSpanState.
	  var mutexprofilerate uint64 // fraction sampled	
		needSysmonWorkaround is true if the workaround for
		golang.org/issue/42515 is needed on NetBSD.
	  var netpollEventFd uintptr // eventfd for netpollBreak
	  var netpollWakeSig atomic.Uint32 // used to avoid duplicate calls of netpollBreak
		newmHandoff contains a list of m structures that need new OS threads.
		This is used by newm in situations where newm itself can't safely
		start an OS thread.
	
		ptrmask for an allocation containing a single pointer.
	  var overflowTag [1]unsafe.Pointer // always nil	
		overrideWrite allows write to be redirected externally, by
		linkname'ing this and setting it to a write function.
		
		overrideWrite should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - golang.zx2c4.com/wireguard/windows
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		panicking is non-zero when crashing the program for an unrecovered panic.
	
		paniclk is held while printing the panic information and stack trace,
		so that two concurrent panics don't overlap their output.
	
		pendingPreemptSignals is the number of preemption signals
		that have been sent but not received. This is only used on Darwin.
		For #41702.
	
		persistentChunks is a list of all the persistent chunks we have
		allocated. The list is maintained through the first word in the
		persistent chunk. This is updated atomically.
	
		perThreadSyscall is the system call to execute for the ongoing
		doAllThreadsSyscall.
		
		perThreadSyscall may only be written while mp.needPerThreadSyscall == 0 on
		all Ms.
	
		physHugePageSize is the size in bytes of the OS's default physical huge
		page size whose allocation is opaque to the application. It is assumed
		and verified to be a power of two.
		
		If set, this must be set by the OS init code (typically in osinit) before
		mallocinit. However, setting it at all is optional, and leaving the default
		value is always safe (though potentially less efficient).
		
		Since physHugePageSize is always assumed to be a power of two,
		physHugePageShift is defined as physHugePageSize == 1 << physHugePageShift.
		The purpose of physHugePageShift is to avoid doing divisions in
		performance critical functions.
	
		physHugePageSize is the size in bytes of the OS's default physical huge
		page size whose allocation is opaque to the application. It is assumed
		and verified to be a power of two.
		
		If set, this must be set by the OS init code (typically in osinit) before
		mallocinit. However, setting it at all is optional, and leaving the default
		value is always safe (though potentially less efficient).
		
		Since physHugePageSize is always assumed to be a power of two,
		physHugePageShift is defined as physHugePageSize == 1 << physHugePageShift.
		The purpose of physHugePageShift is to avoid doing divisions in
		performance critical functions.
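		
		Because the huge page size is a power of two, division and rounding by
		it reduce to shifts and masks. A small sketch of the equivalence; the
		variable names mirror the ones above, but the code and the 2 MiB value
		are only illustrative:
		
			package main
			
			import "fmt"
			
			func main() {
				var (
					physHugePageSize  uintptr = 1 << 21 // e.g. 2 MiB huge pages
					physHugePageShift uint    = 21      // physHugePageSize == 1 << physHugePageShift
				)
			
				n := uintptr(13<<21 + 12345)
			
				// Dividing and rounding down via shifts and masks instead of division.
				fmt.Println(n/physHugePageSize == n>>physHugePageShift)                           // true
				fmt.Println(n&^(physHugePageSize-1) == (n>>physHugePageShift)<<physHugePageShift) // true
			}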
	
		physPageSize is the size in bytes of the OS's physical pages.
		Mapping and unmapping operations must be done at multiples of
		physPageSize.
		
		This must be set by the OS init code (typically in osinit) before
		mallocinit.
	
		pinnedTypemaps are the map[typeOff]*_type from the moduledata objects.
		
		These typemap objects are allocated at run time on the heap, but the
		only direct reference to them is in the moduledata, created by the
		linker and marked SNOPTRDATA so it is ignored by the GC.
		
		To make sure the map isn't collected, we keep a second reference here.
	
		to be able to test that the GC panics when a pinned pointer is leaking, this
		panic function is a variable that can be overwritten by a test.
	  var poolcleanup func()
		printBacklog is a circular buffer of messages written with the builtin
		print* functions, for use in postmortem analysis of core dumps.
	
		Information about what cpu features are available.
		Packages outside the runtime should not use these
		as they are not an external api.
		Set on startup in asm_{386,amd64}.s
	
		profBlockLock protects the contents of every blockRecord struct
	
		profInsertLock protects changes to the start of all *bucket linked lists
	
		profMemActiveLock protects the active field of every memRecord struct
	
		profMemFutureLock is a set of locks that protect the respective elements
		of the future array of every memRecord struct
	  var racecgosync uint64 // represents possible synchronization in C code	
		reflectOffs holds type offsets defined at run time by the reflect package.
		
		When a type is defined at run time, its *rtype data lives on the heap.
		There are a wide range of possible addresses the heap may use, that
		may not be representable as a 32-bit offset. Moreover the GC may
		one day start moving heap memory, in which case there is no stable
		offset that can be defined.
		
		To provide stable offsets, we pin *rtype objects in a global map
		and treat the offset as an identifier. We use negative offsets that
		do not overlap with any compile-time module offsets.
		
		Entries are created by reflect.addReflectOff.
	
		runningPanicDefers is non-zero while running deferred functions for panic.
		This is used to try hard to get a panic stack trace out when exiting.
	
		This slice records the initializing tasks that need to be
		done to start up the runtime. It is built by the linker.
	
		runtimeInitTime is the nanotime() at which the runtime started.
	  var scavenge struct{gcPercentGoal atomic.Uint64; memoryLimitGoal atomic.Uint64; assistTime atomic.Int64; backgroundTime atomic.Int64}	
		Sleep/wait state of the background scavenger.
	
		secureMode holds the value of AT_SECURE passed in the auxiliary vector.
	
		set_crosscall2 is set by the runtime/cgo package
		set_crosscall2 should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/ebitengine/purego
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		sig handles communication between the signal handler and os/signal.
		Other than the inuse and recv fields, the fields are accessed atomically.
		
		The wanted and ignored fields are only written by one goroutine at
		a time; access is controlled by the handlers Mutex in os/signal.
		The fields are only read by that one goroutine and by the signal handler.
		We access them atomically to minimize the race between setting them
		in the goroutine calling os/signal and the signal handler,
		which may be running in a different thread. That race is unavoidable,
		as there is no connection between handling a signal and receiving one,
		but atomic instructions should minimize it.
	
		If the signal handler receives a SIGPROF signal on a non-Go thread,
		it tries to collect a traceback into sigprofCallers.
		sigprofCallersUse is set to non-zero while sigprofCallers holds a traceback.
	
		sigsetAllExiting is used by sigblock(true) when a thread is
		exiting.
	
		sigsysIgnored is non-zero if we are currently ignoring SIGSYS. See issue #69065.
	  var size_to_class128 [249]uint8
	  var size_to_class8 [129]uint8
		spanSetBlockPool is a global pool of spanSetBlocks.
	
		Global pool of large stack spans.
	  var stackPoisonCopy int // fill stack that should not be accessed with garbage, to detect bad dereferences during copy	
		Global pool of spans that have free stacks.
		Stacks are assigned an order according to size.
		
			order = log_2(size/FixedStack)
		
		There is a free list for each order.
	
		startingStackSize is the amount of stack that new goroutines start with.
		It is a power of 2, and between fixedStack and maxstacksize, inclusive.
		startingStackSize is updated every GC by tracking the average size of
		stacks scanned during the GC.
	
		OS-specific startup can set startupRand if the OS passes
		random data to the process at startup time.
		For example Linux passes 16 bytes in the auxv vector.
	
		staticuint64s is used to avoid allocating in convTx for small integer values.
		staticuint64s[0] == 0, staticuint64s[1] == 1, and so forth.
		It is defined in assembler code so that it is read-only.
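		The underlying trick can be sketched outside the runtime: keep a
		read-only table of the values 0..255 and return pointers into it, so
		boxing a small value into an interface needs no fresh allocation. The
		smallInts table and boxSmall helper below are hypothetical names; the
		real table is defined in assembly:
		
			package main
			
			import "fmt"
			
			// smallInts plays the role of staticuint64s: one shared entry
			// per value 0..255.
			var smallInts = func() (t [256]uint64) {
				for i := range t {
					t[i] = uint64(i)
				}
				return
			}()
			
			// boxSmall returns a pointer to a shared uint64 instead of
			// allocating a fresh one for each small value.
			func boxSmall(v uint8) *uint64 {
				return &smallInts[v]
			}
			
			func main() {
				p, q := boxSmall(7), boxSmall(7)
				fmt.Println(*p, p == q) // 7 true: both point at the same entry
			}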
	
		Temporary variable for stopTheWorld, when it can't write to the stack.
		
		Protected by worldsema.
	
		If you add to this list, also add it to src/internal/trace/parser.go.
		If you change the values of any of the stw* constants, bump the trace
		version number and make a copy of this.
	
		TODO: These should be locals in testAtomic64, but we don't 8-byte
		align stack variables on 386.
	
		testSigtrap and testSigusr1 are used by the runtime tests. If
		non-nil, they are called on SIGTRAP/SIGUSR1. If the handler returns
		true, the normal behavior on this signal is suppressed.
	  var testSigusr1 func(gp *g) bool	
		Bitmask of Ps that may have a timer, one bit per P. Reads and writes
		must be atomic. Length may change at safe points.
		
		Ideally, the timer mask would be kept immediately consistent on any timer
		operations. Unfortunately, updating a shared global data structure in the
		timer hot path adds too much overhead in applications frequently switching
		between no timers and some timers.
		
		As a compromise, the timer mask is updated only on pidleget / pidleput. A
		running P (returned by pidleget) may add a timer at any time, so its mask
		must be set. An idle P (passed to pidleput) cannot add new timers while
		idle, so if it has no timers at that time, its mask may be cleared.
		
		Thus, we get the following effects on timer-stealing in findrunnable:
		
		  - Idle Ps with no timers when they go idle are never checked in findrunnable
		    (for work- or timer-stealing; this is the ideal case).
		  - Running Ps must always be checked.
		  - Idle Ps whose timers are stolen must continue to be checked until they run
		    again, even after timer expiration.
		
		When the P starts running again, the mask should be set, as a timer may be
		added at any time.
		
		TODO(prattmic): Additional targeted updates may improve the above cases.
		e.g., updating the mask when stealing a timer.
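		A hedged sketch of the kind of per-P atomic bitmask this describes;
		the procMask type and its set/clear/read helpers are illustrative
		names, not the runtime's API:
		
			package main
			
			import (
				"fmt"
				"sync/atomic"
			)
			
			// procMask is a bitmask with one bit per P. All accesses are
			// atomic so concurrent readers never observe torn updates.
			type procMask []atomic.Uint32
			
			func newProcMask(nprocs int) procMask {
				return make(procMask, (nprocs+31)/32)
			}
			
			func (m procMask) set(id int) {
				word, bit := &m[id/32], uint32(1)<<(id%32)
				for old := word.Load(); !word.CompareAndSwap(old, old|bit); old = word.Load() {
				}
			}
			
			func (m procMask) clear(id int) {
				word, bit := &m[id/32], uint32(1)<<(id%32)
				for old := word.Load(); !word.CompareAndSwap(old, old&^bit); old = word.Load() {
				}
			}
			
			func (m procMask) read(id int) bool {
				return m[id/32].Load()&(uint32(1)<<(id%32)) != 0
			}
			
			func main() {
				mask := newProcMask(64)
				mask.set(5)                             // P 5 starts running: it may add a timer
				fmt.Println(mask.read(5), mask.read(6)) // true false
				mask.clear(5)                           // P 5 goes idle with no timers
				fmt.Println(mask.read(5))               // false
			}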
	
		trace is global tracing context.
	
		Trace advancer goroutine.
	  var typecache [256]typeCacheBucket	  var uniqueMapCleanup chan struct{} // for unique	
		runtime variable to check if the processor we're running on
		actually supports the instructions used by the AES-based
		hash implementation.
	
		If useCheckmark is true, marking of an object uses the checkmark
		bits instead of the standard mark bits.
	  var userArenaState struct{lock mutex; reuse []liveUserArenaChunk; fault []liveUserArenaChunk}	  var vgetrandomAlloc struct{states []uintptr; statesLock mutex; stateSize uintptr; mmapProt int32; mmapFlags int32}	
		Holding worldsema grants an M the right to try to stop the world.
	
		The compiler knows about this variable.
		If you change it, you must change builtin/runtime.go, too.
		If you change the first four bytes, you must also change the write
		barrier insertion code.
		
		writeBarrier should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/bytedance/sonic
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
	
		Set in runtime.cpuinit.
		TODO: deprecate these; use internal/cpu directly.
	  var xbuckets atomic.UnsafePointer // *bucket, mutex profile buckets	
		base address for all 0-byte allocations
	
		zeroVal is used by reflect via linkname.
		
		zeroVal should be an internal detail,
		but widely used packages access it using linkname.
		Notable members of the hall of shame include:
		  - github.com/ugorji/go/codec
		
		Do not remove or change the type signature.
		See go.dev/issue/67401.
Package-Level Constants (total 843, in which 3 are exported)
	
		Compiler is the name of the compiler toolchain that built the
		running binary. Known toolchains are:
		
			gc      Also known as cmd/compile.
			gccgo   The gccgo front end, part of the GCC compiler suite.
	
		GOARCH is the running program's architecture target:
		one of 386, amd64, arm, s390x, and so on.
	
		GOOS is the running program's operating system target:
		one of darwin, freebsd, linux, and so on.
		To view possible combinations of GOOS and GOARCH, run "go tool dist list".
		
		_64bit = 1 on 64-bit systems, 0 on 32-bit systems
	const _AT_HWCAP2 = 26 // hardware capability bit vector 2	const _AT_PAGESZ = 6 // System physical page size	const _AT_PLATFORM = 15 // string identifying platform	const _AT_RANDOM = 25 // introduced in 2.6.29	const _AT_SECURE = 23 // secure mode boolean	const _AT_SYSINFO_EHDR = 33	const _BUS_ADRALN = 1	const _BUS_ADRERR = 2	const _BUS_OBJERR = 3	
		Clone, the Linux rfork.
	const _DT_GNU_HASH = 1879047925 // GNU-style dynamic symbol hash table	const _DT_STRTAB = 5 // Address of string table	const _DT_SYMTAB = 6 // Address of symbol table	const _DT_VERDEF = 1879048188	const _DT_VERSYM = 1879048176	const _EI_NIDENT = 16	
		These values are the same on all known Unix systems.
		If we find a discrepancy some day, we can split them out.
	const _FD_CLOEXEC = 1	const _FinBlockSize = 4096	const _FixAllocChunk = 16384 // Chunk size for FixAlloc	const _FPE_FLTDIV = 3	const _FPE_FLTINV = 7	const _FPE_FLTOVF = 4	const _FPE_FLTRES = 6	const _FPE_FLTSUB = 8	const _FPE_FLTUND = 5	const _FPE_INTDIV = 1	const _FPE_INTOVF = 2	const _FUTEX_PRIVATE_FLAG = 128	const _FUTEX_WAIT_PRIVATE = 128	const _FUTEX_WAKE_PRIVATE = 129	const _GCmarktermination = 2 // GC mark termination: allocate black, P's help GC, write barrier ENABLED	
		_Gcopystack means this goroutine's stack is being moved. It
		is not executing user code and is not on a run queue. The
		stack is owned by the goroutine that put it in _Gcopystack.
	
		_Gdead means this goroutine is currently unused. It may be
		just exited, on a free list, or just being initialized. It
		is not executing user code. It may or may not have a stack
		allocated. The G and its stack (if any) are owned by the M
		that is exiting the G or that obtained the G from the free
		list.
	
		_Genqueue_unused is currently unused.
	
		_Gidle means this goroutine was just allocated and has not
		yet been initialized.
	
		_Gmoribund_unused is currently unused, but hardcoded in gdb
		scripts.
	
		Number of goroutine ids to grab from sched.goidgen to local per-P cache at once.
		16 seems to provide enough amortization, but other than that it's a mostly arbitrary number.
	
		_Gpreempted means this goroutine stopped itself for a
		suspendG preemption. It is like _Gwaiting, but nothing is
		yet responsible for ready()ing it. Some suspendG must CAS
		the status to _Gwaiting to take responsibility for
		ready()ing this G.
	
		_Grunnable means this goroutine is on a run queue. It is
		not currently executing user code. The stack is not owned.
	
		_Grunning means this goroutine may execute user code. The
		stack is owned by this goroutine. It is not on a run queue.
		It is assigned an M and a P (g.m and g.m.p are valid).
	
		_Gscan combined with one of the above states other than
		_Grunning indicates that GC is scanning the stack. The
		goroutine is not executing user code and the stack is owned
		by the goroutine that set the _Gscan bit.
		
		_Gscanrunning is different: it is used to briefly block
		state transitions while GC signals the G to scan its own
		stack. This is otherwise like _Grunning.
		
		atomicstatus&~Gscan gives the state the goroutine will
		return to when the scan completes.
	
		defined constants
	
		_Gsyscall means this goroutine is executing a system call.
		It is not executing user code. The stack is owned by this
		goroutine. It is not on a run queue. It is assigned an M.
	
		_Gwaiting means this goroutine is blocked in the runtime.
		It is not executing user code. It is not on a run queue,
		but should be recorded somewhere (e.g., a channel wait
		queue) so it can be ready()d when necessary. The stack is
		not owned *except* that a channel operation may read or
		write parts of the stack under the appropriate channel
		lock. Otherwise, it is not safe to access the stack after a
		goroutine enters _Gwaiting (e.g., it may get moved).
	const _ITIMER_PROF = 2	const _ITIMER_REAL = 0	const _ITIMER_VIRTUAL = 1	
		_KindSpecialCleanup is for tracking cleanups.
	
		_KindSpecialFinalizer is for tracking finalizers.
	
		_KindSpecialPinCounter is a special used for objects that are pinned
		multiple times.
	
		_KindSpecialProfile is for memory profiling.
	
		_KindSpecialReachable is a special used for tracking
		reachability during testing.
	
		_KindSpecialWeakHandle is used for creating weak pointers.
	const _MADV_COLLAPSE = 25	const _MADV_DONTNEED = 4	const _MADV_FREE = 8	const _MADV_HUGEPAGE = 14	const _MADV_NOHUGEPAGE = 15	const _MAP_FIXED = 16	const _MAP_PRIVATE = 2	
		Max number of threads to run garbage collection.
		2, 3, and 4 are all plausible maximums depending
		on the hardware details of the machine. The garbage
		collector scales well to 32 cpus.
	const _MaxSmallSize = 32768	const _NumSizeClasses = 68	
		Number of orders that get caching. Order 0 is FixedStack
		and each successive order is twice as large.
		We want to cache 2KB, 4KB, 8KB, and 16KB stacks. Larger stacks
		will be allocated directly.
		Since FixedStack is different on different systems, we
		must vary NumStackOrders to keep the same maximum cached size.
		  OS               | FixedStack | NumStackOrders
		  -----------------+------------+---------------
		  linux/darwin/bsd | 2KB        | 4
		  windows/32       | 4KB        | 3
		  windows/64       | 8KB        | 2
		  plan9            | 4KB        | 3
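		As a rough cross-check of the table above (an illustration, not the
		runtime's code), the order count can be derived from the fixed stack
		size and the 16 KB maximum cached stack size:
		
			package main
			
			import (
				"fmt"
				"math/bits"
			)
			
			// numStackOrders caches stacks from fixedStack up to 16 KB,
			// one order per power-of-two size.
			func numStackOrders(fixedStack int) int {
				const maxCached = 16 << 10 // 16 KB
				return bits.TrailingZeros(uint(maxCached/fixedStack)) + 1
			}
			
			func main() {
				for _, fs := range []int{2 << 10, 4 << 10, 8 << 10} {
					fmt.Printf("FixedStack %d KB -> NumStackOrders %d\n", fs>>10, numStackOrders(fs))
				}
				// Prints 4, 3, and 2, matching the table.
			}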
	const _O_CLOEXEC = 524288	const _O_NONBLOCK = 2048	const _PageShift = 13	
		_Pdead means a P is no longer used (GOMAXPROCS shrank). We
		reuse Ps if GOMAXPROCS increases. A dead P is mostly
		stripped of its resources, though a few things remain
		(e.g., trace buffers).
	
		_Pgcstop means a P is halted for STW and owned by the M
		that stopped the world. The M that stopped the world
		continues to use its P, even in _Pgcstop. Transitioning
		from _Prunning to _Pgcstop causes an M to release its P and
		park.
		
		The P retains its run queue and startTheWorld will restart
		the scheduler on Ps with non-empty run queues.
	
		_Pidle means a P is not being used to run user code or the
		scheduler. Typically, it's on the idle P list and available
		to the scheduler, but it may just be transitioning between
		other states.
		
		The P is owned by the idle list or by whatever is
		transitioning its state. Its run queue is empty.
	const _PROT_EXEC = 4	const _PROT_NONE = 0	const _PROT_READ = 1	const _PROT_WRITE = 2	
		_Prunning means a P is owned by an M and is being used to
		run user code or the scheduler. Only the M that owns this P
		is allowed to change the P's status from _Prunning. The M
		may transition the P to _Pidle (if it has no more work to
		do), _Psyscall (when entering a syscall), or _Pgcstop (to
		halt for the GC). The M may also hand ownership of the P
		off directly to another M (e.g., to schedule a locked G).
	
		_Psyscall means a P is not running user code. It has
		affinity to an M in a syscall but is not owned by it and
		may be stolen by another M. This is similar to _Pidle but
		uses lightweight transitions and maintains M affinity.
		
		Leaving _Psyscall must be done with a CAS, either to steal
		or retake the P. Note that there's an ABA hazard: even if
		an M successfully CASes its original P back to _Prunning
		after a syscall, it must understand the P may have been
		used by another M in the interim.
	const _PT_DYNAMIC = 2 // Dynamic linking information	const _SA_ONSTACK = 134217728	const _SA_RESTART = 268435456	const _SA_RESTORER = 67108864	const _SA_SIGINFO = 4	const _SEGV_ACCERR = 2	const _SEGV_MAPERR = 1	const _SHN_UNDEF = 0 // Undefined section	const _SHT_DYNSYM = 11 // Dynamic linker symbol table	const _SI_KERNEL = 128	const _si_max_size = 128	const _SIG_BLOCK = 0	const _SIG_SETMASK = 2	const _SIG_UNBLOCK = 1	
		Values for the flags field of a sigTabT.
	const _sigev_max_size = 64	
		Values for the flags field of a sigTabT.
	const _SIGSTKFLT = 16	
		Values for the flags field of a sigTabT.
	const _SIGVTALRM = 26	const _SOCK_DGRAM = 2	const _SS_DISABLE = 2	
		Per-P, per order stack segment cache size.
	const _STB_GLOBAL = 1 // Global symbol	const _STT_NOTYPE = 0 // Symbol type is not specified	const _SYS_SECCOMP = 1	
		Tiny allocator parameters, see "Tiny allocator" comment in malloc.go.
	const _TinySizeClass int8 = 2	const _VER_FLG_BASE = 1 // Version definition of file itself	const _WorkbufSize = 2048 // in bytes; larger values result in less contention	const active_spin = 4 // referenced in proc.go for sync.Mutex implementation	const active_spin_cnt = 30 // referenced in proc.go for sync.Mutex implementation	
		addrBits is the number of bits needed to represent a virtual address.
		
		See heapAddrBits for a table of address space sizes on
		various architectures. 48 bits is enough for all
		architectures except s390x.
		
		On AMD64, virtual addresses are 48-bit (or 57-bit) numbers sign extended to 64.
		We shift the address left 16 to eliminate the sign extended part and make
		room in the bottom for the count.
		
		On s390x, virtual addresses are 64-bit. There's not much we
		can do about this, so we just hope that the kernel doesn't
		get to really high addresses and panic if it does.
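		A hedged sketch of the amd64 packing described above: shift the
		48-bit, sign-extended address left by 16 so the sign-extension bits
		fall off and a 16-bit count fits in the low bits. The taggedPointer
		type and pack helper below are illustrative, not the runtime's
		definitions:
		
			package main
			
			import "fmt"
			
			// taggedPointer packs a 48-bit virtual address and a 16-bit
			// count into a single uint64.
			type taggedPointer uint64
			
			const tagBits = 16
			
			func pack(addr uintptr, count uint16) taggedPointer {
				return taggedPointer(uint64(addr)<<tagBits | uint64(count))
			}
			
			func (tp taggedPointer) addr() uintptr {
				// An arithmetic (signed) right shift restores the sign
				// extension of a canonical amd64 address.
				return uintptr(int64(tp) >> tagBits)
			}
			
			func (tp taggedPointer) count() uint16 { return uint16(tp) }
			
			func main() {
				tp := pack(0x00007fffdeadbeef, 42)
				fmt.Printf("%#x %d\n", tp.addr(), tp.count()) // 0x7fffdeadbeef 42
			}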
	
		On AIX, 64-bit addresses are split into 36-bit segment number and 28-bit
		offset in segment.  Segment numbers in the range 0x0A0000000-0x0AFFFFFFF(LSA)
		are available for mmap.
		We assume all tagged addresses are from memory allocated with mmap.
		We use one bit to distinguish between the two ranges.
	const aixTagBits = 10	
		arenaBaseOffset is the pointer value that corresponds to
		index 0 in the heap arena map.
		
		On amd64, the address space is 48 bits, sign extended to 64
		bits. This offset lets us handle "negative" addresses (or
		high addresses if viewed as unsigned).
		
		On aix/ppc64, this offset allows keeping heapAddrBits at 48.
		Otherwise, it would need to be 60 in order to handle mmap addresses
		(in the range 0x0a00000000000000 - 0x0afffffffffffff), but in that
		case the memory reserved in (s *pageAlloc).init for chunks causes
		significant slowdowns.
		
		On other platforms, the user address space is contiguous
		and starts at 0, so no offset is necessary.
	
		A typed version of this constant that will make it into DWARF (for viewcore).
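		A hedged sketch of how such a bias maps both halves of the
		sign-extended amd64 address space onto non-negative arena indexes,
		assuming the 1<<47 bias and 64 MB arena size mentioned elsewhere in
		this documentation; arenaBias and arenaIndexOf are illustrative names:
		
			package main
			
			import "fmt"
			
			const (
				arenaBias      = 1 << 47  // assumed amd64 bias
				heapArenaBytes = 64 << 20 // assumed 64 MB arenas
			)
			
			// arenaIndexOf biases the address so that the lowest canonical
			// ("most negative") amd64 address maps to index 0. The unsigned
			// addition deliberately wraps for sign-extended addresses.
			func arenaIndexOf(p uint64) uint64 {
				return (p + arenaBias) / heapArenaBytes
			}
			
			func main() {
				low := uint64(0x0000_0000_c000_0000)  // a typical low user-space address
				high := uint64(0xffff_8000_0000_0000) // lowest canonical "negative" address
				fmt.Println(arenaIndexOf(high), arenaIndexOf(low)) // 0, then a larger index
			}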
	
		arenaBits is the total bits in a combined arena map index.
		This is split between the index into the L1 arena map and
		the L2 arena map.
	
		arenaL1Bits is the number of bits of the arena number
		covered by the first level arena map.
		
		This number should be small, since the first level arena
		map requires PtrSize*(1<<arenaL1Bits) of space in the
		binary's BSS. It can be zero, in which case the first level
		index is effectively unused. There is a performance benefit
		to this, since the generated code can be more efficient,
		but it comes at the cost of having a large L2 mapping.
		
		We use the L1 map on 64-bit Windows because the arena size
		is small, but the address space is still 48 bits, and
		there's a high cost to having a large L2.
	
		arenaL1Shift is the number of bits to shift an arena frame
		number by to compute an index into the first level arena map.
	
		arenaL2Bits is the number of bits of the arena number
		covered by the second level arena index.
		
		The size of each arena map allocation is proportional to
		1<<arenaL2Bits, so it's important that this not be too
		large. 48 bits leads to 32MB arena index allocations, which
		is about the practical threshold.
	const asanenabled = false	
		avxSupported indicates that the CPU supports AVX instructions.
	const boundsConvert boundsErrorCode = 8 // (*[x]T)(s), 0 <= x <= len(s) failed	const boundsIndex boundsErrorCode = 0 // s[x], 0 <= x < len(s) failed	const boundsSlice3Acap boundsErrorCode = 5 // s[?:?:x], 0 <= x <= cap(s) failed	const boundsSlice3Alen boundsErrorCode = 4 // s[?:?:x], 0 <= x <= len(s) failed	const boundsSlice3B boundsErrorCode = 6 // s[?:x:y], 0 <= x <= y failed (but boundsSlice3A didn't happen)	const boundsSlice3C boundsErrorCode = 7 // s[x:y:?], 0 <= x <= y failed (but boundsSlice3A/B didn't happen)	const boundsSliceAcap boundsErrorCode = 2 // s[?:x], 0 <= x <= cap(s) failed	const boundsSliceAlen boundsErrorCode = 1 // s[?:x], 0 <= x <= len(s) failed	const boundsSliceB boundsErrorCode = 3 // s[x:y], 0 <= x <= y failed (but boundsSliceA didn't happen)	
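		For orientation, ordinary slice expressions are what produce these
		codes at run time; the small program below (purely illustrative)
		provokes the boundsSliceB case, s[x:y] with x > y, and recovers the
		resulting runtime error:
		
			package main
			
			import "fmt"
			
			func main() {
				defer func() {
					// A failed s[x:y] expression panics with a runtime
					// error describing the bad bounds.
					fmt.Println("recovered:", recover())
				}()
				s := make([]int, 3)
				lo, hi := 2, 1
				_ = s[lo:hi] // 0 <= lo <= hi must hold; lo > hi panics here
			}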
		size of bucket hash table
	
		buffer of pending write data
	const canCreateFile = true	
		capacityPerProc is the limiter's bucket capacity for each P in GOMAXPROCS.
	const cgoCheckPointerFail = "cgo argument has Go pointer to unpinned Go pointer"	const cgoResultFail = "cgo result is unpinned Go pointer or points to unpinned Go pointer"	const cgoWriteBarrierFail = "unpinned Go pointer stored into non-Go memory"	
		clobberdeadPtr is a special value that is used by the compiler to
		clobber dead stack slots, when -clobberdead flag is set.
	
		Clone, the Linux rfork.
	
		concurrentSweep is a debug flag. Disabling this flag
		ensures all spans are swept while the world is stopped.
	const cpuStatsDep statDep = 2 // corresponds to cpuStatsAggregate	
		Disable crash stack on Windows for now. Apparently, throwing an exception
		on a non-system-allocated crash stack causes EXCEPTION_STACK_OVERFLOW and
		hangs the process (see issue 63938).
	const debugCallRuntime = "call from within the Go runtime"	const debugCallSystemStack = "executing on Go runtime stack"	const debugCallUnknownFunc = "call from unknown function"	const debugCallUnsafePoint = "call not at safe point"	
		check the BP links during traceback.
	
		debugLogBytes is the size of each per-M ring buffer. This is
		allocated off-heap to avoid blowing up the M and hence the GC'd
		heap size.
	
		debugLogHeaderSize is the number of bytes in the framing
		header of every dlog record.
	const debugLogHex = 6	const debugLogInt = 4	const debugLogPC = 11	const debugLogPtr = 7	const debugLogString = 8	
		debugLogStringLimit is the maximum number of bytes in a string.
		Above this, the string will be truncated with "..(n more bytes).."
	
		debugLogSyncSize is the number of bytes in a sync record.
	const debugLogTraceback = 12	const debugLogUint = 5	const debugLogUnknown = 1	
		debugScanConservative enables debug logging for stack
		frames that are scanned conservatively.
	const debugSelect = false	
		debugTraceReentrancy checks if the trace is reentrant.
		
		This is optional because throwing in a function makes it instantly
		not inlineable, and we want traceAcquire to be inlineable for
		low overhead when the trace is disabled.
	
		defaultHeapMinimum is the value of heapMinimum for GOGC==100.
	
		traceAdvancePeriod is the approximate period between
		new generations.
	const dlogEnabled = false	const doubleCheckHeapSetType = false	
		doubleCheckMalloc enables a bunch of extra checks to malloc to double-check
		that various invariants are upheld.
		
		We might consider turning these on by default; many of them previously were.
		They account for a few % of mallocgc's cost though, which does matter somewhat
		at scale.
	
		drainCheckThreshold specifies how many units of work to do
		between self-preemption checks in gcDrain. Assuming a scan
		rate of 1 MB/ms, this is ~100 µs. Lower values have higher
		overhead in the scan loop (the scheduler check may perform
		a syscall, so its overhead is nontrivial). Higher values
		make the system less responsive to incoming work.
	
		These errors are reported (via writeErrStr) by some OS-specific
		versions of newosproc and newosproc0.
	const fastlogNumBits = 5	const fieldKindEface = 3	const fieldKindEol = 0	const fieldKindIface = 2	const fieldKindPtr = 1	
		finalizer goroutine status.
	const fixedRootCount = 2	const fixedStack = 2048	
		The minimum stack size to allocate.
		The hackery here rounds fixedStack0 up to a power of 2.
	const fixedStack1 = 2047	const fixedStack2 = 2047	const fixedStack3 = 2047	const fixedStack4 = 2047	const fixedStack5 = 2047	const fixedStack6 = 2047	
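		The fixedStack1 through fixedStack6 constants implement the classic
		OR-shift cascade that rounds fixedStack0 up to a power of two at
		compile time. The same trick written as a function, purely as an
		illustration:
		
			package main
			
			import "fmt"
			
			// roundUpPow2 mirrors the cascade: subtract 1, smear the highest
			// set bit into all lower bits with OR-shifts, then add 1 to land
			// on the next power of two.
			func roundUpPow2(x uint32) uint32 {
				x--
				x |= x >> 1
				x |= x >> 2
				x |= x >> 4
				x |= x >> 8
				x |= x >> 16
				return x + 1
			}
			
			func main() {
				fmt.Println(roundUpPow2(2048)) // 2048: already a power of two
				fmt.Println(roundUpPow2(2049)) // 4096: rounded up
			}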
		forcePreemptNS is the time slice given to a G before it is
		preempted.
	
		Must agree with internal/buildcfg.FramePointerEnabled.
	const freeChunkSum pallocSum = 2251800887427584	
		Values for m.freeWait.
	
		Values for m.freeWait.
	
		Values for m.freeWait.
	
		freezeStopWait is a large value that freezetheworld sets
		sched.stopwait to in order to request that all Gs permanently stop.
	
		gcAssistTimeSlack is the nanoseconds of mutator assist time that
		can accumulate on a P before updating gcController.assistTime.
	const gcBackgroundMode gcMode = 0 // concurrent GC and sweep	
		gcBackgroundUtilization is the fixed CPU utilization for background
		marking. It must be <= gcGoalUtilization. The difference between
		gcGoalUtilization and gcBackgroundUtilization will be made up by
		mark assists. The scheduler will aim to use within 50% of this
		goal.
		
		As a general rule, there's little reason to set gcBackgroundUtilization
		< gcGoalUtilization. One reason might be in mostly idle applications,
		where goroutines are unlikely to assist at all, so the actual
		utilization will be lower than the goal. But this is a moot point
		because the idle mark workers already soak up idle CPU resources.
		These two values are still kept separate however because they are
		distinct conceptually, and in previous iterations of the pacer the
		distinction was more important.
	const gcBitsChunkBytes uintptr = 65536	
		gcCPULimiterUpdatePeriod dictates the maximum amount of wall-clock time
		we can go before updating the limiter.
	
		gcCreditSlack is the amount of scan work credit that can
		accumulate locally before updating gcController.heapScanWork and,
		optionally, gcController.bgScanCredit. Lower values give a more
		accurate assist ratio and make it more likely that assists will
		successfully steal background credit. Higher values reduce memory
		contention.
	const gcForceBlockMode gcMode = 2 // stop-the-world GC now and STW sweep (forced by user)	const gcForceMode gcMode = 1 // stop-the-world GC now, concurrent sweep	
		gcGoalUtilization is the goal CPU utilization for
		marking as a fraction of GOMAXPROCS.
		
		Increasing the goal utilization will shorten GC cycles as the GC
		has more resources behind it, lessening costs from the write barrier,
		but comes at the cost of increasing mutator latency.
	
		gcMarkWorkerDedicatedMode indicates that the P of a mark
		worker is dedicated to running that mark worker. The mark
		worker should run without preemption.
	
		gcMarkWorkerFractionalMode indicates that a P is currently
		running the "fractional" mark worker. The fractional worker
		is necessary when GOMAXPROCS*gcBackgroundUtilization is not
		an integer and using only dedicated workers would result in
		utilization too far from the target of gcBackgroundUtilization.
		The fractional worker should run until it is preempted and
		will be scheduled to pick up the fractional part of
		GOMAXPROCS*gcBackgroundUtilization.
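		A small worked example of the arithmetic, with an assumed GOMAXPROCS
		of 5 and an assumed background utilization goal of 0.25: the target is
		1.25 worker Ps, so one P runs a dedicated worker and the fractional
		worker covers the remaining 0.25:
		
			package main
			
			import (
				"fmt"
				"math"
			)
			
			func main() {
				const gomaxprocs = 5               // assumed for this example
				const backgroundUtilization = 0.25 // assumed goal
				target := gomaxprocs * backgroundUtilization
				dedicated := math.Floor(target)  // whole Ps running dedicated workers
				fractional := target - dedicated // fraction of one P left over
				fmt.Printf("target=%.2f dedicated=%d fractional=%.2f\n",
					target, int(dedicated), fractional)
				// target=1.25 dedicated=1 fractional=0.25
			}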
	
		gcMarkWorkerIdleMode indicates that a P is running the mark
		worker because it has nothing else to do. The idle worker
		should run until it is preempted and account its time
		against gcController.idleMarkTime.
	
		gcMarkWorkerNotWorker indicates that the next scheduled G is not
		starting work and the mode should be ignored.
	
		gcOverAssistWork determines how many extra units of scan work a GC
		assist does when an assist happens. This amortizes the cost of an
		assist by pre-paying for this many bytes of future allocations.
	const gcStatsDep statDep = 3 // corresponds to gcStatsAggregate	
		gcTriggerCycle indicates that a cycle should be started if
		we have not yet started cycle number gcTrigger.n (relative
		to work.cycles).
	
		gcTriggerHeap indicates that a cycle should be started when
		the heap size reaches the trigger heap size computed by the
		controller.
	
		gcTriggerTime indicates that a cycle should be started when
		it's been more than forcegcperiod nanoseconds since the
		previous GC cycle.
	
		gTrackingPeriod is the number of transitions out of _Grunning between
		latency tracking runs.
	
		exported value for testing
	const hashRandomBytes = 128	
		haveSysmon indicates whether there is sysmon thread support.
		
		No threads on wasm yet, so no sysmon.
	
		heapAddrBits is the number of bits in a heap address. On
		amd64, addresses are sign-extended beyond heapAddrBits. On
		other arches, they are zero-extended.
		
		On most 64-bit platforms, we limit this to 48 bits based on a
		combination of hardware and OS limitations.
		
		amd64 hardware limits addresses to 48 bits, sign-extended
		to 64 bits. Addresses where the top 16 bits are not either
		all 0 or all 1 are "non-canonical" and invalid. Because of
		these "negative" addresses, we offset addresses by 1<<47
		(arenaBaseOffset) on amd64 before computing indexes into
		the heap arenas index. In 2017, amd64 hardware added
		support for 57 bit addresses; however, currently only Linux
		supports this extension and the kernel will never choose an
		address above 1<<47 unless mmap is called with a hint
		address above 1<<47 (which we never do).
		
		arm64 hardware (as of ARMv8) limits user addresses to 48
		bits, in the range [0, 1<<48).
		
		ppc64, mips64, and s390x support arbitrary 64 bit addresses
		in hardware. On Linux, Go leans on stricter OS limits. Based
		on Linux's processor.h, the user address space is limited as
		follows on 64-bit architectures:
		
		Architecture  Name              Maximum Value (exclusive)
		---------------------------------------------------------------------
		amd64         TASK_SIZE_MAX     0x007ffffffff000 (47 bit addresses)
		arm64         TASK_SIZE_64      0x01000000000000 (48 bit addresses)
		ppc64{,le}    TASK_SIZE_USER64  0x00400000000000 (46 bit addresses)
		mips64{,le}   TASK_SIZE64       0x00010000000000 (40 bit addresses)
		s390x         TASK_SIZE         1<<64 (64 bit addresses)
		
		These limits may increase over time, but are currently at
		most 48 bits except on s390x. On all architectures, Linux
		starts placing mmap'd regions at addresses that are
		significantly below 48 bits, so even if it's possible to
		exceed Go's 48 bit limit, it's extremely unlikely in
		practice.
		
		On 32-bit platforms, we accept the full 32-bit address
		space because doing so is cheap.
		mips32 only has access to the low 2GB of virtual memory, so
		we further limit it to 31 bits.
		
		On ios/arm64, although 64-bit pointers are presumably
		available, pointers are truncated to 33 bits in iOS <14.
		Furthermore, only the top 4 GiB of the address space are
		actually available to the application. In iOS >=14, more
		of the address space is available, and the OS can now
		provide addresses outside of those 33 bits. Pick 40 bits
		as a reasonable balance between address space usage by the
		page allocator, and flexibility for what mmap'd regions
		we'll accept for the heap. We can't just move to the full
		48 bits because this uses too much address space for older
		iOS versions.
		TODO(mknyszek): Once iOS <14 is deprecated, promote ios/arm64
		to a 48-bit address space like every other arm64 platform.
		
		WebAssembly currently has a limit of 4GB linear memory.
	
		heapArenaBitmapWords is the size of each heap arena's bitmap in uintptrs.
	
		heapArenaBytes is the size of a heap arena. The heap
		consists of mappings of size heapArenaBytes, aligned to
		heapArenaBytes. The initial heap mapping is one arena.
		
		This is currently 64MB on 64-bit non-Windows and 4MB on
		32-bit and on Windows. We use smaller arenas on Windows
		because all committed memory is charged to the process,
		even if it's not touched. Hence, for processes with small
		heaps, the mapped arena space needs to be commensurate.
		This is particularly important with the race detector,
		since it significantly amplifies the cost of committed
		memory.
	const heapArenaWords = 8388608	const heapStatsDep statDep = 0 // corresponds to heapStatsAggregate	const isSbrkPlatform = false	const itabInitSize = 512	const largeSizeDiv = 128	const limiterEventIdle limiterEventType = 4 // Refers to time a P spent on the idle list.	const limiterEventIdleMarkWork limiterEventType = 1 // Refers to an idle mark worker (see gcMarkWorkerMode).	const limiterEventMarkAssist limiterEventType = 2 // Refers to mark assist (see gcAssistAlloc).	const limiterEventNone limiterEventType = 0 // None of the following events.	const limiterEventScavengeAssist limiterEventType = 3 // Refers to a scavenge assist (see allocSpan).	
		limiterEventTypeMask is a mask for the bits in p.limiterEventStart that represent
		the event type. The rest of the bits of that field represent a timestamp.
	const loadFactorDen = 8	
		TODO: remove? These are used by tests but not the actual map
	
		The default lowest and highest continuation byte.
	
		Constants representing the ranks of all non-leaf runtime locks, in rank order.
		Locks with lower rank must be taken before locks with higher rank,
		in addition to satisfying the partial order in lockPartialOrder.
		A few ranks allow self-cycles, which are specified in lockPartialOrder.
		The ranks fall into groups such as SCHED, MALLOC, MPROF, STACKGROW,
		TRACE, TRACEGLOBAL, and WB.
	
		lockRankLeafRank is the rank of lock that does not have a declared rank,
		and hence is a leaf lock.
	
		logHeapArenaBytes is log_2 of heapArenaBytes. For clarity,
		prefer using heapArenaBytes where possible (we need the
		constant to compute some other constants).
	
		logicalStackSentinel is a sentinel value at pcBuf[0] signifying that
		pcBuf[1:] holds a logical stack requiring no further processing. Any other
		value at pcBuf[0] represents a skip value to apply to the physical stack in
		pcBuf[1:] after inline expansion.
	const logMaxPackedValue = 21	const logPallocChunkBytes = 22	
		logScavChunkInUseMax is the number of bits needed to represent the number
		of pages allocated in a single chunk. This is 1 more than log2 of the
		number of pages in the chunk because we need to represent a fully-allocated
		chunk.
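		A quick check of that count, using the 4 MiB chunk size and 8 KiB page
		size documented elsewhere in this section: 512 pages per chunk need 10
		bits, one more than log2(512):
		
			package main
			
			import (
				"fmt"
				"math/bits"
			)
			
			func main() {
				const pallocChunkBytes = 4 << 20 // 4 MiB
				const pageSize = 8 << 10         // 8 KiB pages (page shift 13)
				pagesPerChunk := pallocChunkBytes / pageSize
				// bits.Len(512) is 10: log2(512) = 9 plus one extra bit so
				// the fully-allocated count (512 itself) is representable.
				fmt.Println(pagesPerChunk, bits.Len(uint(pagesPerChunk))) // 512 10
			}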
	
		A malloc header is functionally a single type pointer, but
		we need to use 8 here to ensure 8-byte alignment of allocations
		on 32-bit platforms. It's wasteful, but a lot of code relies on
		8-byte alignment for 8-byte atomics.
	const mantbits32 uint = 23	const mantbits64 uint = 52	
		maxAlloc is the maximum size of an allocation. On 64-bit,
		it's theoretically possible to allocate 1<<heapAddrBits bytes. On
		32-bit, however, this is one less than 1<<32 because the
		number of bytes in the address space doesn't actually fit
		in a uintptr.
	const maxCPUProfStack = 64	const maxObjsPerSpan = 1024	
		maxObletBytes is the maximum bytes of an object to scan at
		once. Larger objects will be split up into "oblets" of at
		most this size. Since we can scan 1–2 MB/ms, 128 KB bounds
		scan preemption at ~100 µs.
		
		This must be > _MaxSmallSize so that the object base is the
		span base.
	
		maxPackedValue is the maximum value that any of the three fields in
		the pallocSum may take on.
	
		maxPagesPerPhysPage is the maximum number of supported runtime pages per
		physical page, based on maxPhysPageSize.
	
		maxPhysHugePageSize sets an upper-bound on the maximum huge page size
		that the runtime supports.
	
		maxPhysPageSize is the maximum page size the runtime supports.
	
		maxProfStackDepth is the highest valid value for debug.profstackdepth.
		It's used for the bucket.stk func.
		TODO(fg): can we get rid of this?
	
		Numbers fundamental to the encoding.
	
		maxSkip is to account for deferred inline expansion
		when using frame pointer unwinding. We record the stack
		with "physical" frame pointers but handle skipping "logical"
		frames at some point after collecting the stack. So
		we need extra space in order to avoid getting fewer than the
		desired maximum number of frames after expansion.
		This should be at least as large as the largest skip value
		used for profiling; otherwise stacks may be truncated inconsistently.
	const maxSmallSize = 32768	
		maxStackScanSlack is the bytes of stack space allocated or freed
		that can accumulate on a P before updating gcController.stackSize.
	const maxTinySize = 16	const maxTraceStringLen = 1024	
		The maximum trigger constant is chosen somewhat arbitrarily, but the
		current constant has served us well over the years.
	
		maxWhen is the maximum value for timer's when field.
	
		memoryLimitHeapGoalHeadroomPercent is how much headroom the memory-limit-based
		heap goal should have as a percent of the maximum possible heap goal allowed
		to maintain the memory limit.
	
		memoryLimitMinHeapGoalHeadroom is the minimum amount of headroom the
		pacer gives to the heap goal when operating in the memory-limited regime.
		That is, it'll reduce the heap goal by this many extra bytes off of the
		base calculation, at minimum.
	
		profile types
	
		These values must be kept identical to their corresponding Kind* values
		in the runtime/metrics package.
	const minHeapAlign = 8	
		minHeapForMetadataHugePages sets a threshold on when certain kinds of
		heap metadata, currently the arenas map L2 entries and page alloc bitmap
		mappings, are allowed to be backed by huge pages. If the heap goal ever
		exceeds this threshold, then huge pages are enabled.
		
		These numbers are chosen with the assumption that huge pages are on the
		order of a few MiB in size.
		
		The kind of metadata this applies to has a very low overhead when compared
		to address space used, but their constant overheads for small heaps would
		be very high if they were to be backed by huge pages (e.g. a few MiB makes
		a huge difference for an 8 MiB heap, but barely any difference for a 1 GiB
		heap). The benefit of huge pages is also not worth it for small heaps,
		because only a very, very small part of the metadata is used for small heaps.
		
		N.B. If the heap goal exceeds the threshold then shrinks to a very small size
		again, then huge pages will still be enabled for this mapping. The reason is that
		there's no point unless we're also returning the physical memory for these
		metadata mappings back to the OS. That would be quite complex to do in general
		as the heap is likely fragmented after a reduction in heap size.
	
		minLegalPointer is the smallest possible legal pointer.
		This is the smallest possible architectural page size,
		since we assume that the first page is never mapped.
		
		This should agree with minZeroPage in the compiler.
	
		minPhysPageSize is a lower-bound on the physical page size. The
		true physical page size may be larger than this. In contrast,
		sys.PhysPageSize is an upper-bound on the physical page size.
	
		Spend at least 1 ms scavenging, otherwise the corresponding
		sleep time to maintain our desired utilization is too low to
		be reliable.
	
		The minimum object size that has a malloc header, exclusive.
		
		The size of this value controls overheads from the malloc header.
		The minimum size is bound by writeHeapBitsSmall, which assumes that the
		pointer bitmap for objects of a size smaller than this doesn't cross
		more than one pointer-word boundary. This sets an upper-bound on this
		value at the number of bits in a uintptr, multiplied by the pointer
		size in bytes.
		
		We choose a value here that has a natural cutover point in terms of memory
		overheads. This value just happens to be the maximum possible value this
		can be.
		
		A span with heap bits in it will have 128 bytes of heap bits on 64-bit
		platforms, and 256 bytes of heap bits on 32-bit platforms. The first size
		class where malloc headers match this overhead for 64-bit platforms is
		512 bytes (8 KiB / 512 bytes * 8 bytes-per-header = 128 bytes of overhead).
		On 32-bit platforms, this same point is the 256 byte size class
		(8 KiB / 256 bytes * 8 bytes-per-header = 256 bytes of overhead).
		
		Guaranteed to be exactly at a size class boundary. The reason this value is
		an exclusive minimum is subtle. Suppose we're allocating a 504-byte object
		and it's rounded up to 512 bytes for the size class. If minSizeForMallocHeader
		is 512 and an inclusive minimum, then a comparison against minSizeForMallocHeader
		by the two values would produce different results. In other words, the comparison
		would not be invariant to size-class rounding. Eschewing this property means a
		more complex check or possibly storing additional state to determine whether a
		span has malloc headers.
	
		minTagBits is the minimum number of tag bits that we expect.
	
		minTimeForTicksPerSecond is the minimum elapsed time we require to consider our ticksPerSecond
		measurement to be of decent enough quality for profiling.
		
		There's a linear relationship here between minimum time and error from the true value.
		The error from the true ticks-per-second in a linux/amd64 VM seems to be:
		-   1 ms -> ~0.02% error
		-   5 ms -> ~0.004% error
		-  10 ms -> ~0.002% error
		-  50 ms -> ~0.0003% error
		- 100 ms -> ~0.0001% error
		
		We're willing to take 0.004% error here, because ticksPerSecond is intended to be used for
		converting durations, not timestamps. Durations are usually going to be much larger, and so
		the tiny error doesn't matter. The error is definitely going to be a problem when trying to
		use this for timestamps, as it'll make those timestamps much less likely to line up.
	
		The minimum trigger constant was chosen empirically: given a sufficiently
		fast/scalable allocator with 48 Ps that could drive the trigger ratio
		to <0.05, this constant causes applications to retain the same peak
		RSS compared to not having this allocator.
	const mProfCycleWrap uint32 = 100663296	const msanenabled = false	const mSpanDead mSpanState = 0	const mSpanInUse mSpanState = 1 // allocated for garbage collected heap	const mSpanManual mSpanState = 2 // allocated for manual management (e.g., stack allocator)	const mutexActiveSpinSize = 30	const mutexLocked = 1	const mutexMMask = 1023	const mutexMOffset = 8 // alignment of heap-allocated Ms (those other than m0)	const mutexSleeping = 2	const mutexSpinning = 256	const mutexStackLocked = 512	const mutexTailWakePeriod = 16	const numSpanClasses = 136	const numStatsDeps statDep = 4	const numSweepClasses = 272	
		Offsets into internal/cpu records for use in assembly.
	
		osHasLowResClock indicates that timestamps produced by nanotime on the platform have a
		low resolution, typically on the order of 1 ms or more.
	
		osHasLowResClockInt is osHasLowResClock but in integer form, so it can be used to create
		constants conditionally.
	
		osHasLowResTimer indicates that the platform's internal timer system has a low resolution,
		typically on the order of 1 ms or more.
	
		osRelaxMinNS is the number of nanoseconds of idleness to tolerate
		without performing an osRelax. Since osRelax may reduce the
		precision of timers, this should be enough larger than the relaxed
		timer precision to keep the timer error acceptable.
	
		Constants for testing.
	const pageAlloc64Bit = 1	const pageCachePages uintptr = 64	const pagesPerArena = 8192	
		pagesPerReclaimerChunk indicates how many pages to scan from the
		pageInUse bitmap at a time. Used by the page reclaimer.
		
		Higher values reduce contention on scanning indexes (such as
		h.reclaimIndex), but increase the minimum latency of the
		operation.
		
		The time required to scan this many pages can vary a lot depending
		on how many spans are actually freed. Experimentally, it can
		scan for pages at ~300 GB/ms on a 2.6GHz Core i7, but can only
		free spans at ~32 MB/ms. Using 512 pages bounds this at
		roughly 100µs.
		
		Must be a multiple of the pageInUse bitmap element size and
		must also evenly divide pagesPerArena.
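	
		As a back-of-the-envelope check of the "roughly 100µs" bound above (assuming the
		usual 8 KiB runtime page size; the rates are restated from the comment, not
		measured here):
		
			// 512 pages/chunk * 8 KiB/page = 4 MiB of pages per chunk.
			// At the quoted worst case of ~32 MB/ms for freeing spans,
			// 4 MiB / 32 MB/ms ≈ 0.13 ms, i.e. on the order of 100µs per chunk.
			const (
				assumedPageSize        = 8 << 10 // 8 KiB; assumption for this sketch
				pagesPerReclaimerChunk = 512     // value from the comment above
				bytesPerChunk          = pagesPerReclaimerChunk * assumedPageSize // 4 MiB
			)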
	
		pagesPerSpanRoot indicates how many pages to scan from a span root
		at a time. Used by special root marking.
		
		Higher values improve throughput by increasing locality, but
		increase the minimum latency of a marking operation.
		
		Must be a multiple of the pageInUse bitmap element size and
		must also evenly divide pagesPerArena.
	const pallocChunkBytes = 4194304	
		The size of a bitmap chunk, i.e. the amount of bits (that is, pages) to consider
		in the bitmap at once.
	
		Number of bits needed to represent all indices into the L1 of the
		chunks map.
		
		See (*pageAlloc).chunks for more details. Update the documentation
		there should this number change.
	const pallocChunksL1Shift = 13	
		pallocChunksL2Bits is the number of bits of the chunk index number
		covered by the second level of the chunks map.
		
		See (*pageAlloc).chunks for more details. Update the documentation
		there should this change.
	
		pollDesc contains 2 binary semaphores, rg and wg, to park reader and writer
		goroutines respectively. The semaphore can be in the following states:
		
			pdReady - io readiness notification is pending;
			          a goroutine consumes the notification by changing the state to pdNil.
			pdWait - a goroutine prepares to park on the semaphore, but not yet parked;
			         the goroutine commits to park by changing the state to G pointer,
			         or, alternatively, concurrent io notification changes the state to pdReady,
			         or, alternatively, concurrent timeout/close changes the state to pdNil.
			G pointer - the goroutine is blocked on the semaphore;
			            io notification or timeout/close changes the state to pdReady or pdNil respectively
			            and unparks the goroutine.
			pdNil - none of the above.
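	
		A minimal, illustrative sketch of the reader-side transition described above,
		using the state names from the comment. This is not the runtime's implementation
		(which packs the state and a goroutine pointer into a single word and also handles
		timeouts and close); the helper name tryConsumeReady is an assumption for this sketch.
		
			package main
			
			import "sync/atomic"
			
			// States for an illustrative binary semaphore, mirroring the names above.
			const (
				pdNil   uintptr = 0
				pdReady uintptr = 1
				pdWait  uintptr = 2
			)
			
			// tryConsumeReady models "a goroutine consumes the notification by
			// changing the state to pdNil". It reports whether io was ready.
			func tryConsumeReady(sema *atomic.Uintptr) bool {
				return sema.CompareAndSwap(pdReady, pdNil)
			}
			
			func main() {
				var rg atomic.Uintptr
				rg.Store(pdReady)        // simulate an io readiness notification
				_ = tryConsumeReady(&rg) // consumes it, leaving the state at pdNil
			}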
	
		persistentChunkSize is the number of bytes we allocate when we grow
		a persistentAlloc.
	
		physPageAlignedStacks indicates whether stack allocations must be
		physical page aligned. This is a requirement for MAP_STACK on
		OpenBSD.
const pinnerSize = 64
const pollBlockSize = 4096
const pollClosing = 1
		Error codes returned by runtime_pollReset and runtime_pollWait.
		These must match the values in internal/poll/fd_poll_runtime.go.
	
const pollEventErr = 2
const pollFDSeqBits = 20 // number of bits in pollFDSeq
const pollFDSeqMask = 1048575 // mask for pollFDSeq
	const preemptMSupported = true	
		profBufTagCount is the size of the CPU profile buffer's storage for the
		goroutine tags associated with each sample. A capacity of 1<<14 means
		room for 16k samples, or 160 thread-seconds at a 100 Hz sample rate.
	
		profBufWordCount is the size of the CPU profile buffer's storage for the
		header and stack of each sample, measured in 64-bit words. Every sample
		has a required header of two words. With a small additional header (a
		word or two) and stacks at the profiler's maximum length of 64 frames,
		that capacity can support 1900 samples or 19 thread-seconds at a 100 Hz
		sample rate, at a cost of 1 MiB.
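	
		The 160 thread-second figure follows directly from the numbers quoted above; a
		quick check (values restated from the comment, not read from the runtime source):
		
			const (
				profBufTagSlots = 1 << 14 // 16384 tag slots, one per sample
				sampleRateHz    = 100     // CPU profiling rate assumed above
				// 16384 samples / 100 samples per thread-second ≈ 164, i.e. the
				// "160 thread-seconds" quoted above.
				approxThreadSeconds = profBufTagSlots / sampleRateHz
			)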
const profReaderSleeping profIndex = 4294967296 // reader is sleeping and must be woken up
const profWriteExtra profIndex = 8589934592 // overflow or eof waiting
const raceenabled = false
		To shake out latent assumptions about scheduling order,
		we introduce some randomness into scheduling decisions
		when running with the race detector.
		The need for this was made obvious by changing the
		(deterministic) scheduling order in Go 1.5 and breaking
		many poorly-written tests.
		With the randomness here, as long as the tests pass
		consistently with -race, they shouldn't have latent scheduling
		assumptions.
	
		reduceExtraPercent represents the amount of memory under the limit
		that the scavenger should target. For example, 5 means we target 95%
		of the limit.
		
		The purpose of shooting lower than the limit is to ensure that, once
		close to the limit, the scavenger is working hard to maintain it. If
		we have a memory limit set but are far away from it, there's no harm
		in leaving up to 100-retainExtraPercent live, and it's more efficient
		anyway, for the same reasons that retainExtraPercent exists.
	
		repmovsPreferred indicates that the REP MOVSx instruction is more
		efficient on the CPU.
	
		retainExtraPercent represents the amount of memory over the heap goal
		that the scavenger should keep as a buffer space for the allocator.
		This constant is used when we do not have a memory limit set.
		
		The purpose of maintaining this overhead is to have a greater pool of
		unscavenged memory available for allocation (since using scavenged memory
		incurs an additional cost), to account for heap fragmentation and
		the ever-changing layout of the heap.
	
		riscv64 SV57 mode gives 56 bits of userspace VA.
		The tagged pointer code supports it,
		but broader support for SV57 mode is incomplete,
		and there may be other issues (see #54104).
	const riscv64TagBits = 11	
		rootBlockBytes is the number of bytes to scan per data or
		BSS root.
	
		Numbers fundamental to the encoding.
	
const rwmutexMaxReaders = 1073741824
const scavChunkFlagsMask = 63
		scavChunkHasFree indicates whether the chunk has anything left to
		scavenge. This is the opposite of "empty," used elsewhere in this
		file. The reason we say "HasFree" here is so the zero value is
		correct for a newly-grown chunk. (New memory is scavenged.)
	
		scavChunkHiOccFrac indicates the fraction of pages that need to be allocated
		in the chunk in a single GC cycle for it to be considered high density.
const scavChunkHiOccPages uint16 = 496
const scavChunkInUseMask = 1023
		scavChunkMaxFlags is the maximum number of flags we can have, given how
		a scavChunkData is packed into 8 bytes.
	
		scavengeCostRatio is the approximate ratio between the costs of using previously
		scavenged memory and scavenging memory.
		
		For most systems the cost of scavenging greatly outweighs the costs
		associated with using scavenged memory, making this constant 0. On other systems
		(especially ones where "sysUsed" is not just a no-op) this cost is non-trivial.
		
		This ratio is used as part of a multiplicative factor to help the scavenger account
		for the additional costs of using scavenged memory in its pacing.
	
		The background scavenger is paced according to these parameters.
		
		scavengePercent represents the portion of mutator time we're willing
		to spend on scavenging in percent.
const selectDefault selectDir = 3 // default
const selectRecv selectDir = 2 // case <-Chan:
const selectSend selectDir = 1 // case Chan <- Send
		Prime to not correlate with any user patterns.
	
		sigPerThreadSyscall is the same signal (SIGSETXID) used by glibc for
		per-thread syscalls on Linux. We use it for the same purpose in non-cgo
		binaries.
	
		sigPreempt is the signal used for non-cooperative preemption.
		
		There's no good way to choose this signal, but there are some
		heuristics:
		
		1. It should be a signal that's passed-through by debuggers by
		default. On Linux, this is SIGALRM, SIGURG, SIGCHLD, SIGIO,
		SIGVTALRM, SIGPROF, and SIGWINCH, plus some glibc-internal signals.
		
		2. It shouldn't be used internally by libc in mixed Go/C binaries
		because libc may assume it's the only thing that can handle these
		signals. For example SIGCANCEL or SIGSETXID.
		
		3. It should be a signal that can happen spuriously without
		consequences. For example, SIGALRM is a bad choice because the
		signal handler can't tell if it was caused by the real process
		alarm or not (arguably this means the signal is broken, but I
		digress). SIGUSR1 and SIGUSR2 are also bad because those are often
		used in meaningful ways by applications.
		
		4. We need to deal with platforms without real-time signals (like
		macOS), so those are out.
		
		We use SIGURG because it meets all of these criteria, is extremely
		unlikely to be used by an application for its "real" meaning (both
		because out-of-band data is basically unused and because SIGURG
		doesn't report which socket has the condition, making it pretty
		useless), and even if it is, the application has to be ready for
		spurious SIGURG. SIGIO wouldn't be a bad choice either, but is more
		likely to be used for real.
const sigReceiving = 1
const sigSending = 2
const smallSizeDiv = 8
const smallSizeMax = 1024
const spanAllocHeap spanAllocType = 0 // heap span
const spanAllocPtrScalarBits spanAllocType = 2 // unrolled GC prog bitmap span
const spanAllocStack spanAllocType = 1 // stack span
const spanAllocWorkBuf spanAllocType = 3 // work buf span
const spanSetBlockEntries = 512 // 4KB on 64-bit
const spanSetInitSpineCap = 256 // Enough for 1GB heap on 64-bit
		stackDebug == 0: no logging
		           == 1: logging of per-stack operations
		           == 2: logging of per-frame operations
		           == 3: logging of per-word updates
		           == 4: logging of per-word reads
	const stackFaultOnFree = 0 // old stacks are mapped noaccess to detect use after free	
		Force a stack movement. Used for debugging.
		0xfffffeed in hex.
	
		Thread is forking. Causes a split stack check failure.
		0xfffffb2e in hex.
	const stackFromSystem = 0 // allocate stacks from system memory instead of the heap	
		The stack guard is a pointer this many bytes above the
		bottom of the stack.
		
		The guard leaves enough room for a stackNosplit chain of NOSPLIT calls
		plus one stackSmall frame plus stackSystem bytes for the OS.
		This arithmetic must match that in cmd/internal/objabi/stack.go:StackLimit.
	
		The minimum size of stack used by Go code
	const stackNoCache = 0 // disable per-P small stack caches	
		stackNosplit is the maximum number of bytes that a chain of NOSPLIT
		functions can use.
		This arithmetic must match that in cmd/internal/objabi/stack.go:StackNosplit.
	
		stackPoisonMin is the lowest allowed stack poison value.
	
		Goroutine preemption request.
		0xfffffade in hex.
	
		stackSystem is a number of additional bytes to add
		to each stack below the usual guard area for OS-specific
		purposes like signal handling. Used on Windows, Plan 9,
		and iOS because they do not use a separate stack.
	const stackTraceDebug = false	
		It doesn't really matter what value we start at, but we can't be zero, because
		that'll cause divide-by-zero issues. Pick something conservative which we'll
		also use as a fallback.
	const staticLockRanking = false	
		Reasons to stop-the-world.
		
		Avoid reusing reasons and add new ones instead.
	
	const summaryL0Bits = 14	
		The number of radix bits for each level.
		
		The value of 3 is chosen such that the block of summaries we need to scan at
		each level fits in 64 bytes (2^3 summaries * 8 bytes per summary), which is
		close to the L1 cache line width on many systems. Also, a value of 3 fits 4 tree
		levels perfectly into the 21-bit pallocBits summary field at the root level.
		
		The following equation explains how each of the constants relate:
		summaryL0Bits + (summaryLevels-1)*summaryLevelBits + logPallocChunkBytes = heapAddrBits
		
		summaryLevels is an architecture-dependent value defined in mpagealloc_*.go.
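	
		Plugging in plausible linux/amd64 values makes the relationship concrete
		(summaryLevels = 5 and heapAddrBits = 48 are assumed here for 64-bit platforms;
		pallocChunkBytes = 4194304 = 2^22 gives logPallocChunkBytes = 22):
		
			summaryL0Bits + (summaryLevels-1)*summaryLevelBits + logPallocChunkBytes
			  = 14 + (5-1)*3 + 22
			  = 48
			  = heapAddrBits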
	
		The number of levels in the radix tree.
	
		Code points in the surrogate range are not valid for UTF-8.
	
const sweepClassDone sweepClass = 4294967295
const sweepDrainedMask = 2147483648
		sweepMinHeapDistance is a lower bound on the heap distance
		(in bytes) reserved for concurrent sweeping between GC
		cycles.
const sysStatsDep statDep = 1 // corresponds to sysStatsAggregate
const tagAllocSample = 17
		In addition to the 16 bits taken from the top, we can take 3 from the
		bottom, because node must be pointer-aligned, giving a total of 19 bits
		of count.
	const tagFinalizer = 7	
		The number of bits stored in the numeric tag of a taggedPointer
const tagGoroutine = 4
const tagMemProf = 16
const tagMemStats = 10
const tagOSThread = 9
const tagOtherRoot = 2
const tagQueuedFinalizer = 11
const tagStackFrame = 5
		testSmallBuf forces a small write barrier buffer to stress write
		barrier flushing.
	
		throwTypeNone means that we are not throwing.
	
		throwTypeRuntime is a throw due to a problem with Go itself.
		
		These throws include as much information as possible to aid in
		debugging the runtime, including runtime frames, system goroutines,
		and frame metadata.
	
		throwTypeUser is a throw due to a problem with the application.
		
		These throws do not include runtime frames, system goroutines, or
		frame metadata.
	const timeHistMaxBucketBits = 48 // Note that this is exclusive; 1 higher than the actual range.	
		For the time histogram type, we use an HDR histogram.
		Values are placed in buckets based solely on the most
		significant set bit. Thus, buckets are power-of-2 sized.
		Values are then placed into sub-buckets based on the value of
		the next timeHistSubBucketBits most significant bits. Thus,
		sub-buckets are linear within a bucket.
		
		Therefore, the number of sub-buckets (timeHistNumSubBuckets)
		defines the error. This error may be computed as
		1/timeHistNumSubBuckets*100%. For example, for 16 sub-buckets
		per bucket the error is approximately 6%.
		
		The number of buckets (timeHistNumBuckets), on the
		other hand, defines the range. To avoid producing a large number
		of buckets that are close together, especially for small numbers
		(e.g. 1, 2, 3, 4, 5 ns) that aren't very useful, timeHistNumBuckets
		is defined in terms of the least significant bit (timeHistMinBucketBits)
		that needs to be set before we start bucketing and the most
		significant bit (timeHistMaxBucketBits) that we bucket before we just
		dump it into a catch-all bucket.
		
		As an example, consider the configuration:
		
		   timeHistMinBucketBits = 9
		   timeHistMaxBucketBits = 48
		   timeHistSubBucketBits = 2
		
		Then:
		
		   011000001
		   ^--
		   │ ^
		   │ └---- Next 2 bits -> sub-bucket 3
		   └------- Bit 9 unset -> bucket 0
		
		   110000001
		   ^--
		   │ ^
		   │ └---- Next 2 bits -> sub-bucket 2
		   └------- Bit 9 set -> bucket 1
		
		   1000000010
		   ^-- ^
		   │ ^ └-- Lower bits ignored
		   │ └---- Next 2 bits -> sub-bucket 0
		   └------- Bit 10 set -> bucket 2
		
		Following this pattern, bucket 38 will have the bit 46 set. We don't
		have any buckets for higher values, so we spill the rest into an overflow
		bucket containing values of 2^47-1 nanoseconds or approx. 1 day or more.
		This range is more than enough to handle durations produced by the runtime.
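	
		A hedged sketch of the bucketing rule described above, using the example
		configuration from the comment (minBucketBits = 9, maxBucketBits = 48,
		subBucketBits = 2). The function histBucket and its layout are illustrative only,
		not the runtime's implementation; the catch-all overflow bucket and exact boundary
		conditions are omitted for brevity.
		
			package main
			
			import (
				"fmt"
				"math/bits"
			)
			
			const (
				minBucketBits = 9 // example values from the comment above
				subBucketBits = 2
			)
			
			// histBucket returns the (bucket, sub-bucket) a value falls into:
			// the bucket comes from the most significant set bit, the
			// sub-bucket from the next subBucketBits bits below it.
			func histBucket(v uint64) (bucket, sub int) {
				l := bits.Len64(v) // 1-based position of the most significant set bit
				if l < minBucketBits {
					// All small values share bucket 0 ("bit 9 unset" above).
					return 0, int((v >> (minBucketBits - 1 - subBucketBits)) & (1<<subBucketBits - 1))
				}
				bucket = l - minBucketBits + 1
				sub = int((v >> (l - 1 - subBucketBits)) & (1<<subBucketBits - 1))
				return bucket, sub
			}
			
			func main() {
				fmt.Println(histBucket(0b011000001))  // bucket 0, sub-bucket 3
				fmt.Println(histBucket(0b110000001))  // bucket 1, sub-bucket 2
				fmt.Println(histBucket(0b1000000010)) // bucket 2, sub-bucket 0
			}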
	const timeHistNumBuckets = 40	
		Two extra buckets, one for underflow, one for overflow.
	
		timerDebug enables printing a textual debug trace of all timer operations to stderr.
	
		timerHeaped is set when the timer is stored in some P's heap.
	const timerHeapN = 4	
		timerModified is set when t.when has been modified
		but the heap's heap[i].when entry still needs to be updated.
		That change waits until the heap in which
		the timer appears can be locked and rearranged.
		timerModified is only set when timerHeaped is also set.
	
		timerZombie is set when the timer has been stopped
		but is still present in some P's heap.
		Only set when timerHeaped is also set.
		It is possible for timerModified and timerZombie to both
		be set, meaning that the timer was modified and then stopped.
		A timer sending to a channel may be placed in timerZombie
		to take it out of the heap even though the timer is not stopped,
		as long as nothing is reading from the channel.
	const tinySizeClass int8 = 2	
		tlsSlots is the number of pointer-sized slots reserved for TLS on some platforms,
		like Windows.
	
		The constant is known to the compiler.
		There is no fundamental theory behind this number.
	
		Batch type values for the alloc/free experiment.
	
		Keep a cached value to make gotraceback fast,
		since we call it on every call to gentraceback.
		The cached value is a uint32 in which the low bits
		are the "crash" and "all" settings and the remaining
		bits are the traceback value (0 off, 1 on, 2 include system).
	
		tracebackInnerFrames is the number of innermost frames to print in a
		stack trace. The total maximum frames is tracebackInnerFrames +
		tracebackOuterFrames.
	
		tracebackOuterFrames is the number of outermost frames to print in a
		stack trace.
	
		Maximum number of bytes required to encode uint64 in base-128.
const traceEvCPUSample traceEv = 7 // CPU profiling sample [timestamp, M ID, P ID, goroutine ID, stack ID]
const traceEvCPUSamples traceEv = 6 // start of a section of CPU samples [...traceEvCPUSample]
		Structural events.
	
		Batch event for an experimental batch with a custom format.
	const traceEvFrequency traceEv = 8 // timestamp units per sec [freq]	
		GC events.
const traceEvGCBegin traceEv = 29 // GC start [timestamp, seq, stack ID]
const traceEvGCEnd traceEv = 30 // GC done [timestamp, seq]
const traceEvGCMarkAssistActive traceEv = 34 // GC mark assist active [timestamp, goroutine ID]
const traceEvGCMarkAssistBegin traceEv = 35 // GC mark assist start [timestamp, stack ID]
const traceEvGCMarkAssistEnd traceEv = 36 // GC mark assist done [timestamp]
const traceEvGCSweepActive traceEv = 31 // GC sweep active [timestamp, P ID]
const traceEvGCSweepBegin traceEv = 32 // GC sweep start [timestamp, stack ID]
const traceEvGCSweepEnd traceEv = 33 // GC sweep done [timestamp, swept bytes, reclaimed bytes]
const traceEvGoBlock traceEv = 20 // goroutine blocks [timestamp, reason, stack ID]
		Goroutines.
const traceEvGoCreateBlocked traceEv = 47 // goroutine creation (starts blocked) [timestamp, new goroutine ID, new stack ID, stack ID]
const traceEvGoCreateSyscall traceEv = 15 // goroutine appears in syscall (cgo callback) [timestamp, new goroutine ID]
const traceEvGoDestroy traceEv = 17 // goroutine ends [timestamp]
const traceEvGoDestroySyscall traceEv = 18 // goroutine ends in syscall (cgo callback) [timestamp]
		Annotations.
	
		Experimental goroutine stack events. IDs map reversibly to addresses.
	
		Experimental events.
	
const traceEvGoStart traceEv = 16 // goroutine starts running [timestamp, goroutine ID, goroutine seq]
const traceEvGoStatus traceEv = 25 // goroutine status at the start of a generation [timestamp, goroutine ID, M ID, status]
		GoStatus with stack.
	const traceEvGoStop traceEv = 19 // goroutine yields its time, but is runnable [timestamp, reason, stack ID]	
		Coroutines.
const traceEvGoSwitchDestroy traceEv = 46 // goroutine switch and destroy [timestamp, goroutine ID, goroutine seq]
const traceEvGoSyscallBegin traceEv = 22 // syscall enter [timestamp, P seq, stack ID]
const traceEvGoSyscallEnd traceEv = 23 // syscall exit [timestamp]
const traceEvGoSyscallEndBlocked traceEv = 24 // syscall exit and it blocked at some point [timestamp]
const traceEvGoUnblock traceEv = 21 // goroutine is unblocked [timestamp, goroutine ID, goroutine seq, stack ID]
const traceEvHeapAlloc traceEv = 37 // gcController.heapLive change [timestamp, heap alloc in bytes]
const traceEvHeapGoal traceEv = 38 // gcController.heapGoal() change [timestamp, heap goal in bytes]
		Experimental heap object events. IDs map reversibly to addresses.
	
		Experimental events.
	
	const traceEvNone traceEv = 0 // unused	
		Procs.
const traceEvProcStart traceEv = 10 // start of P [timestamp, P ID, P seq]
const traceEvProcStatus traceEv = 13 // P status at the start of a generation [timestamp, P ID, status]
const traceEvProcSteal traceEv = 12 // P was stolen [timestamp, P ID, P seq, M ID]
const traceEvProcStop traceEv = 11 // stop of P [timestamp]
		Experimental heap span events. IDs map reversibly to base addresses.
	
		Experimental events.
	
const traceEvStack traceEv = 3 // stack table entry [ID, ...{PC, func string ID, file string ID, line #}]
const traceEvStacks traceEv = 2 // start of a section of the stack table [...traceEvStack]
const traceEvString traceEv = 5 // string dictionary entry [ID, length, string]
const traceEvStrings traceEv = 4 // start of a section of the string dictionary [...traceEvString]
		STW.
const traceEvSTWEnd traceEv = 27 // STW done [timestamp]
const traceEvUserLog traceEv = 44 // trace.Log [timestamp, internal task ID, key string ID, stack, value string ID]
const traceEvUserRegionBegin traceEv = 42 // trace.{Start,With}Region [timestamp, internal task ID, name string ID, stack ID]
const traceEvUserRegionEnd traceEv = 43 // trace.{End,With}Region [timestamp, internal task ID, name string ID, stack ID]
const traceEvUserTaskBegin traceEv = 40 // trace.NewTask [timestamp, internal task ID, internal parent task ID, name string ID, stack ID]
const traceEvUserTaskEnd traceEv = 41 // end of a task [timestamp, internal task ID, stack ID]
		traceExperimentAllocFree is an experiment to add alloc/free events to the trace.
	
		traceNoExperiment indicates no experiment.
	
		traceNumExperiments is the number of trace experiments (and 1 higher than
		the highest numbered experiment).
	
		traceProcSyscallAbandoned is a special case of
		traceProcSyscall. It's used in the very specific case
		where the first time a P is mentioned in a generation is
		as part of a ProcSteal event. If that's the first time
		it's mentioned, then there's no GoSyscallBegin to
		connect the P stealing back to at that point. This
		special state indicates this to the parser, so it
		doesn't try to find a GoSyscallEndBlocked that
		corresponds with the ProcSteal.
	const traceRegionAllocBlockData uintptr = 65520	
		Maximum number of PCs in a single stack trace.
		Since events contain only stack id rather than whole stack trace,
		we can allow quite large values here.
	
		Timestamps in trace are produced through either nanotime or cputicks
		and divided by traceTimeDiv. nanotime is used everywhere except on
		platforms where osHasLowResClock is true, because the system clock
		isn't granular enough to get useful information out of a trace in
		many cases.
		
		This makes absolute values of timestamp diffs smaller, and so they are
		encoded in fewer bytes.
		
		The target resolution in all cases is 64 nanoseconds.
		This is based on the fact that fundamentally the execution tracer won't emit
		events more frequently than roughly every 200 ns or so, because that's roughly
		how long it takes to call through the scheduler.
		We could be more aggressive and bump this up to 128 ns while still getting
		useful data, but the extra bit doesn't save us that much and the headroom is
		nice to have.
		
		Hitting this target resolution is easy in the nanotime case: just pick a
		division of 64. In the cputicks case it's a bit more complex.
		
		For x86, on a 3 GHz machine, we'd want to divide by 3*64 to hit our target.
		To keep the division operation efficient, we round that up to 4*64, or 256.
		Given what cputicks represents, we use this on all other platforms except
		for PowerPC.
		The suggested increment frequency for PowerPC's time base register is
		512 MHz according to Power ISA v2.07 section 6.2, so we use 32 on ppc64
		and ppc64le.
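	
		To make the divisor arithmetic above concrete (these values restate the comment's
		reasoning; they are not read from the runtime source):
		
			const (
				targetTraceResNanos = 64 // target timestamp resolution from above
				// nanotime already counts nanoseconds, so dividing by 64 hits
				// the target directly.
				divNanotime = 64
				// A ~3 GHz TSC ticks 3*64 = 192 times per 64 ns; rounded up to
				// 4*64 = 256 to keep the division cheap.
				divCputicksAmd64 = 4 * 64
				// PowerPC's suggested 512 MHz time base ticks 512e6 * 64e-9 ≈ 32.8
				// times per 64 ns, hence 32 on ppc64/ppc64le.
				divCputicksPPC64 = 32
			)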
	
		These constants determine the bounds on the GC trigger as a fraction
		of heap bytes allocated between the start of a GC (heapLive == heapMarked)
		and the end of a GC (heapLive == heapGoal).
		
		The constants are obscured in this way for efficiency. The denominator
		of the fraction is always a power-of-two for a quick division, so that
		the numerator is a single constant integer multiplication.
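	
		As a generic illustration of the efficiency trick mentioned above (not the
		runtime's actual trigger code): a bound expressed as numerator/2^shift can be
		applied with one multiply and one shift instead of a general division.
		
			// applyRatio computes x * numerator / 2^shift using only a multiply
			// and a shift; a power-of-two denominator needs no real division.
			func applyRatio(x, numerator uint64, shift uint) uint64 {
				return (x * numerator) >> shift
			}
			
			// Example: three quarters of 1 MiB, expressed as 3/2^2.
			var threeQuarterMiB = applyRatio(1<<20, 3, 2) // 786432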
	
		Cache of types that have been serialized already.
		We use a type's hash field to pick a bucket.
		Inside a bucket, we keep a list of types that
		have been serialized so far, most recently used first.
		Note: when a bucket overflows we may end up
		serializing a type more than once. That's ok.
	
	const uintptrMask = 18446744073709551615	
		unwindJumpStack indicates that, if the traceback is on a system stack, it
		should resume tracing at the user stack when the system stack is
		exhausted.
	
		unwindPrintErrors indicates that if unwinding encounters an error, it
		should print a message and stop without throwing. This is used for things
		like stack printing, where it's better to get incomplete information than
		to crash. This is also used in situations where everything may not be
		stopped nicely and the stack walk may not be able to complete, such as
		during profiling signals or during a crash.
		
		If neither unwindPrintErrors nor unwindSilentErrors is set, unwinding
		performs extra consistency checks and throws on any error.
		
		Note that there are a small number of fatal situations that will throw
		regardless of unwindPrintErrors or unwindSilentErrors.
	
		unwindSilentErrors silently ignores errors during unwinding.
	
		unwindTrap indicates that the initial PC and SP are from a trap, not a
		return PC from a call.
		
		The unwindTrap flag is updated during unwinding. If set, frame.pc is the
		address of a faulting instruction instead of the return address of a
		call. It also means the liveness at pc may not be known.
		
		TODO: Distinguish frame.continpc, which is really the stack map PC, from
		the actual continuation PC, which is computed differently depending on
		this flag and a few other things.
	const userArenaChunkBytes uintptr = 8388608 // min(userArenaChunkBytesMax, heapArenaBytes)	
		userArenaChunkBytes is the size of a user arena chunk.
	
		userArenaChunkMaxAllocBytes is the maximum size of an object that can
		be allocated from an arena. This number is chosen to cap worst-case
		fragmentation of user arenas to 25%. Larger allocations are redirected
		to the heap.
	
		userArenaChunkPages is the number of pages a user arena chunk uses.
	
		vdsoArrayMax is the byte-size of a maximally sized array on this architecture.
		See cmd/compile/internal/amd64/galign.go arch.MAXWIDTH initialization.
	
		vdsoBloomSizeScale is a scaling factor for gnuhash tables which are uint32 indexed,
		but contain uintptrs
const vdsoDynSize uintptr = 70368744177663
const vdsoHashSize = 281474976710655 // uint32
const vdsoSymStringsSize = 1125899906842623 // byte
		Maximum indices for the array types used when traversing the vDSO ELF structures.
		Computed from architecture-specific max provided by vdso_linux_*.go
	const vdsoVerSymSize = 562949953421311 // uint16	
		verifyTimers can be set to true to add debugging checks that the
		timer heaps are valid.
const waitReasonChanReceive waitReason = 14 // "chan receive"
const waitReasonChanReceiveNilChan waitReason = 3 // "chan receive (nil chan)"
const waitReasonChanSend waitReason = 15 // "chan send"
const waitReasonChanSendNilChan waitReason = 4 // "chan send (nil chan)"
const waitReasonCoroutine waitReason = 37 // "coroutine"
const waitReasonDebugCall waitReason = 30 // "debug call"
const waitReasonDumpingHeap waitReason = 5 // "dumping heap"
const waitReasonFinalizerWait waitReason = 16 // "finalizer wait"
const waitReasonFlushProcCaches waitReason = 33 // "flushing proc caches"
const waitReasonForceGCIdle waitReason = 17 // "force gc (idle)"
const waitReasonGarbageCollection waitReason = 6 // "garbage collection"
const waitReasonGarbageCollectionScan waitReason = 7 // "garbage collection scan"
const waitReasonGCAssistMarking waitReason = 1 // "GC assist marking"
const waitReasonGCAssistWait waitReason = 11 // "GC assist wait"
const waitReasonGCMarkTermination waitReason = 31 // "GC mark termination"
const waitReasonGCScavengeWait waitReason = 13 // "GC scavenge wait"
const waitReasonGCSweepWait waitReason = 12 // "GC sweep wait"
const waitReasonGCWeakToStrongWait waitReason = 38 // "GC weak to strong wait"
const waitReasonGCWorkerActive waitReason = 28 // "GC worker (active)"
const waitReasonGCWorkerIdle waitReason = 27 // "GC worker (idle)"
const waitReasonIOWait waitReason = 2 // "IO wait"
const waitReasonPageTraceFlush waitReason = 36 // "page trace flush"
const waitReasonPanicWait waitReason = 8 // "panicwait"
const waitReasonPreempted waitReason = 29 // "preempted"
const waitReasonSelect waitReason = 9 // "select"
const waitReasonSelectNoCases waitReason = 10 // "select (no cases)"
const waitReasonSemacquire waitReason = 18 // "semacquire"
const waitReasonSleep waitReason = 19 // "sleep"
const waitReasonStoppingTheWorld waitReason = 32 // "stopping the world"
const waitReasonSyncCondWait waitReason = 20 // "sync.Cond.Wait"
const waitReasonSyncMutexLock waitReason = 21 // "sync.Mutex.Lock"
const waitReasonSyncRWMutexLock waitReason = 23 // "sync.RWMutex.Lock"
const waitReasonSyncRWMutexRLock waitReason = 22 // "sync.RWMutex.RLock"
const waitReasonSynctestChanReceive waitReason = 41 // "chan receive (synctest)"
const waitReasonSynctestChanSend waitReason = 42 // "chan send (synctest)"
const waitReasonSynctestRun waitReason = 39 // "synctest.Run"
const waitReasonSynctestSelect waitReason = 43 // "select (synctest)"
const waitReasonSynctestWait waitReason = 40 // "synctest.Wait"
const waitReasonSyncWaitGroupWait waitReason = 24 // "sync.WaitGroup.Wait"
const waitReasonTraceGoroutineStatus waitReason = 34 // "trace goroutine status"
const waitReasonTraceProcStatus waitReason = 35 // "trace proc status"
const waitReasonTraceReaderBlocked waitReason = 25 // "trace reader (blocked)"
const waitReasonWaitForGCCycle waitReason = 26 // "wait for GC cycle"
const waitReasonZero waitReason = 0 // ""
		wbBufEntries is the maximum number of pointers that can be
		stored in the write barrier buffer.
		
		This trades latency for throughput amortization. Higher
		values amortize flushing overhead more, but increase the
		latency of flushing. Higher values also increase the cache
		footprint of the buffer.
		
		TODO: What is the latency cost of this? Tune this value.
	
		Maximum number of entries that we need to ask from the
		buffer in a single call.
	
		workbufAlloc is the number of bytes to allocate at a time
		for new workbufs. This must be a multiple of pageSize and
		should be a multiple of _WorkbufSize.
		
		Larger values reduce workbuf allocation overhead. Smaller
		values reduce heap fragmentation.
The pages are generated with Golds v0.7.6. (GOOS=linux GOARCH=amd64)