Proposal
I'd like to propose basic package reflection in `reflect` to support discovery of types, functions, and variables in a package. Just as value reflection allows discovery of methods and fields given a struct value, package reflection should allow discovery of functions, variables, and types given a package.
Problem
This closes a gap that exists today where code using `reflect` must be "pre-loaded" with the types it should know about: every new type, function, or variable that "wants" to be reflected must first register itself with the reflector. This falls short of how struct reflection works, where reflecting code can iterate over every field and method of a struct.
Example Usage
One possibility is to retrieve a package from an "anchor" type:
```go
import "reflect"
import "time"

t := reflect.TypeOf(time.Time{})
p := t.Package() // Returns a reflect.Package
```
Another, which uses a modified syntax, allows direct retrieval:
```go
import "reflect"
import "time"

p := reflect.PackageOf(time) // Returns a reflect.Package
```
Example Interface
A `reflect.Package` might respond to similar calls as a `reflect.Value`:
```go
type Package interface {
	// Equal reports whether p is equal to q.
	// For two invalid packages, Equal reports true.
	// Otherwise, Equal reports true if p and q
	// refer to the same package.
	Equal(q Package) bool

	// Function returns a function value corresponding to p's i'th function.
	// Function panics if i is out of range.
	Function(i int) Value

	// FunctionByName returns a function value corresponding to the function
	// of p with the given name.
	// It returns the zero Value if no function was found.
	FunctionByName(name string) Value

	// NumFunction returns the number of functions in the package p.
	NumFunction() int

	// NumType returns the number of types in the package p.
	NumType() int

	// NumVar returns the number of variables in the package p.
	NumVar() int

	// Path returns a defined package's path, that is, the import path
	// that uniquely identifies the package, such as "encoding/base64".
	Path() string

	// String returns a string representation of the package.
	// The string representation may use shortened package names
	// (e.g., base64 instead of "encoding/base64") and is not
	// guaranteed to be unique among packages.
	// To test for package identity, compare the Packages directly.
	String() string

	// Type returns the type corresponding to p's i'th type.
	// Type panics if i is out of range.
	Type(i int) Type

	// TypeByName returns the type of p with the given name.
	// It returns nil if no type was found.
	TypeByName(name string) Type

	// Var returns the i'th variable of the package p.
	// It panics if i is out of range.
	Var(i int) Value

	// VarByName returns the package variable with the given name.
	// It returns the zero Value if no variable was found.
	VarByName(name string) Value
}
```
Comment From: ianlancetaylor
This makes it impossible for the linker to discard unused functions and variables, which it does today. I don't think this is realistically feasible.
Comment From: mvdan
To further drive @ianlancetaylor's point: this would make most Go binaries significantly larger, considerably hurting https://github.com/golang/go/issues/6853.
Comment From: glacials
Is this different from what happens for unused struct methods and fields?
Comment From: thediveo
The "unused struct fields" aren't removed because they are typically needed to interface with syscalls, C libraries, et cetera. Such fields might be used for padding or could be leftovers from syscall API changes over time. In any case, I don't think you can compare unused struct fields with unused functions. If your proposal were implemented as is, my Wireshark plugin would be multiple times its current size, because all of a sudden there would be "tons" of Azure and Kubernetes API client code included, even if not used at all.
Maybe you would instead like to be able to discover only the functions and types that actually end up in a binary because they are used? Maybe something more akin to thwd's answer to "How to discover all package types at runtime?"
Comment From: glacials
I see, thanks for explaining. For the problem I'm trying to solve, which is something akin to first- and second-party plugin management, the packages being excluded from the binary would be a show-stopper.
So it sounds like this is a nonstarter, although I'm curious to hear if there are practical alternatives outside of having the importing package maintain a list of explicit references to names it needs, even when all further usage is through metaprogramming.
Options I've looked at are [`plugin`](https://pkg.go.dev/plugin), which creates too much fragility for us, and [`github.com/hashicorp/go-plugin`](https://github.com/hashicorp/go-plugin), which seems complex for our needs.
Comment From: Splizard
> This makes it impossible for the linker to discard unused functions and variables, which it does today. I don't think this is realistically feasible.
Only when the package is used for reflection; if the package is not used in this way, unused functions and variables can be discarded as usual. This is similar to runtime reflection data, which isn't stored if the value is never stored within an interface value.
Comment From: ianlancetaylor
If one small package somewhere uses this new facility, then the linker can't discard anything from any package anywhere. That's a hazard for any large Go project.
Comment From: Splizard
> If one small package somewhere uses this new facility, then the linker can't discard anything from any package anywhere. That's a hazard for any large Go project.
If you can call `Package()` on arbitrary `reflect.Type` values, sure. An easy adjustment to the proposal is to disallow this and restrict package reflection to explicit package selection calls, i.e. `reflect.PackageOf(time)`.
Comment From: earthboundkid
Re: plug-ins, this talk was interesting and lists several ways of doing it: https://youtube.com/watch?v=pRT36VqpljA
Comment From: Jorropo
Currently, trying to pass a package identifier as anything gives a `use of package time not in selector` compile-time error. `PackageOf` would be an ad hoc exception to this, which I would find confusing.
As far as I can tell, only `unsafe` and `builtin` are allowed that luxury of special language rules right now, both because they happen to predate generics, and `unsafe` also because having it return constants is just too useful, and you should know what you are doing anyway if you are using it.
IMO, in this case the gains do not outweigh the cost of another std exception to the language rules.
Comment From: ianlancetaylor
I see that you suggested `reflect.Package(time)` in the original post, but I don't understand why a program would want to get a list of functions in a known package. What is the use case for that?
Comment From: earthboundkid
ISTM it would probably be better as `reflect.Package("time")` because, apart from the technical issues with bare package names, you could then do `pkgName := discoverPlugins(); pkg := reflect.Package(pkgName)`.
Comment From: Jorropo
@carlmjohnson This would open the door to package names being passed at runtime, preventing almost all code pruning. What if someone passes a package that is not used at all by the binary? Should the runtime download the sources from proxy.golang.org, build them in memory, and load the result JIT-like?
You could make the signature always require a `const string`, but that is not a thing the language offers and would be yet another exception.
Comment From: thediveo
@Jorropo please don't overshoot here; this comes across as trying a little bit too hard to push the original request over the cliff. Instead, let's try to see the chances here.
From my own experience with plugin management of statically built-in plugins, I can imagine a benefit in being able to discover the actually linked-in modules, and then their exported types (again, only the actually included ones).
However, I'm unclear as to how to ensure that the plugin functions to be exported actually get linked in, as just underscore-importing their containing packages isn't sufficient. Does this mean that this idea of package inspection would fall flat because the desired pruning would cause the plugin functions to not even be linked into the final binary?
Comment From: sblackstone
There are some use cases for being able to get the name of the package the code originated from, such as logging.
I often use a function like `pkgLogger()` which just returns a zerolog instance with the package name set. This could be cleaned up significantly if I didn't need to reimplement it all over the place.
Comment From: earthboundkid
`runtime.Frame.Function` already contains the package name of a stack frame, which you can use for logging. I guess the scenario you're talking about is logging based on the package of a struct instead of the package of a function?
Comment From: kcross-ctoken
Hmmm ... I think this would be better done by adding some built-in package functions to iterate over the public members of a package.
```go
import "reflect"
import "time"

// It would be nice if we could use generics here, but 🤷
type Variable[T any] struct {
	Name string
	Type reflect.Type
	// ... some hidden state to be able to get and set.
}

func (v *Variable[T]) Get() iter.Seq[*T] { /* ... */ }
func (v *Variable[T]) Set(value T)       { /* ... */ }

for v := range time.Variables() {
	// ...
}

// Similar with constants and functions:
// time.Functions()
// time.Constants()
```
Comment From: apparentlymart
Reflecting on this quite some time later (since the previous comment put it back on my radar 😀 ) it seems like the motivating use-case is not very well defined and so it's hard to know what alternative solutions might be acceptable.
There is some high-level idea about it being for runtime-loaded plugins discussed in https://github.com/golang/go/issues/61796#issuecomment-1668667236, but with plugin systems I've encountered before there's typically some expectation of each plugin implementing some specific API rather than just trying to enumerate all of the symbols declared in it.
I'd be interested to learn more about the details of the use-case to understand to what extent it's necessary for the unit of analysis to be an entire package rather than, for example, a struct type containing a mix of normal data fields and function pointer fields, or some other thing that package `reflect` already knows how to analyze.
I'm asking this only in the hope that it might move the discussion forward to concrete alternatives that have better feasibility than whole-package analysis.
Comment From: kcross-ctoken
Sure ... my current use case is to feed the enum declarations in a package back to a UI frontend, to avoid static compile-time generation. I.e.:
```go
type Fruit int

const (
	Banana Fruit = 1
	Apple  Fruit = 2
	Orange Fruit = 3
)

type FruitDescriptor struct {
	Name  string
	Value Fruit
}

// Instead of doing this:
func GetFruits() []FruitDescriptor {
	return []FruitDescriptor{{"Banana", Banana} /* , ... */}
}
```
I don't want to tie the UI implementation to the package (by defining the `FruitDescriptor` in the UI), and I also don't want to have to update a getter function each time it changes 🤔 An alternative is to generate this via a code generator, but that feels like a hack considering that in other languages like Java this is pretty trivial to do.
Comment From: Splizard
@kcross-ctoken you don't need package reflection or code generation to do this if you use sum types or enums instead of plain constants.
https://github.com/qlova/tech/blob/master/sum/sum.go
https://github.com/grow-graphics/gd/blob/master/variant/Enum/enum.go
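To make the linked approach concrete, here is a minimal, self-contained sketch of the "enum values as struct fields" pattern. All names are illustrative, not the linked packages' API: the set of values is just the field set of a struct, so it can be enumerated with ordinary struct reflection, no package reflection needed:

```go
package main

import (
	"fmt"
	"reflect"
)

type Fruit int

// fruits declares the enum's values as struct fields, so the value
// set is discoverable through normal struct reflection.
type fruits struct {
	Banana, Apple, Orange Fruit
}

// initEnum assigns each field a distinct sequential value.
func initEnum[T any](e *T) {
	v := reflect.ValueOf(e).Elem()
	for i := 0; i < v.NumField(); i++ {
		v.Field(i).SetInt(int64(i + 1))
	}
}

// names lists the field names, i.e. the enum's value names.
func names[T any](e *T) []string {
	t := reflect.TypeOf(*e)
	out := make([]string, t.NumField())
	for i := range out {
		out[i] = t.Field(i).Name
	}
	return out
}

func main() {
	var Fruits fruits
	initEnum(&Fruits)
	fmt.Println(names(&Fruits)) // [Banana Apple Orange]
	fmt.Println(Fruits.Apple)   // 2
}
```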
Comment From: kcross-ctoken
TBH, I'm not sure why I need another package 🤷 when reflect should be able to tell me this.
Comment From: apparentlymart
Thanks for the extra detail, @kcross-ctoken.
So I guess that variation of the problem is rooted in the fact that Go doesn't currently have "real" enumerations, in the sense of a set of values that are directly associated with a type. Instead, we have a convention of declaring package-level constants of that type, which transforms the question "what are the enumerated values of this type?" into "what constants of this type are declared in the same package as the type?".
That's similar to the work that the `stringer` tool needs to do, but it is impossible to do at runtime because the needed metadata is not included in the final executable.
For that problem in particular we could imagine a more constrained design: put the "what constants of this type are declared in the same package as this type" logic in the compiler, save the result as part of the metadata for each type in the executable, and add a new method to `reflect.Type` which returns an ~~`iter.Seq[reflect.Value]`~~ `iter.Seq2[string, reflect.Value]` of them.
That would at least limit the new reflection metadata bloat to types that have nearby constants in the same package, which I expect is considerably less than retaining the entire symbol table of every package. Some might object to even that much extra bloat, though, and suggest that this should be opt-in for each type that needs it. If it's opt-in, then arguably that opt-in could be to add a `go:generate` directive comment that runs a `stringer`-like tool to generate the needed metadata at development time rather than inside the compiler. 🤔
The package `Enum` linked in an earlier comment shows a different shape of opt-in: it declares the type via the type parameter of a generic type. That's a clever idea, but I'd agree that I wouldn't want to use something like that unless it was included either in the same library that uses the metadata or in the standard library, because otherwise I'd be making a third-party package part of the public API of my own package and have that dependency be a compatibility constraint forever. 😬
I note that there are already several other proposals for adding more complete support for "enumerations" to Go. In this comment I'm trying to focus only on the problem of runtime reflection because that was the original framing of this proposal and the framing in https://github.com/golang/go/issues/61796#issuecomment-3056856262, but more general enum support would presumably solve this and some other problems at the same time.
For easier reference, here are the other enum proposals I'm aware of:
- https://github.com/golang/go/issues/19814
- https://github.com/golang/go/issues/28438
- https://github.com/golang/go/issues/28987
...and here are some others that are not directly about enumerations, but discuss other concepts that are somewhat-enum-like:
- https://github.com/golang/go/issues/57644
- https://github.com/golang/go/issues/54685
- https://github.com/golang/go/issues/19412
Comment From: jimmyfrasche
I brought up something similar in one of those enum proposals. You'd want an `iter.Seq2[string, Value]` to access the name as well, though. The obvious downside is that it would add to the size of the binary to record all of that, but I would imagine it's a negligible proportion and that there are some clever ways to cut it down in practice.
edit: found my earlier post: https://github.com/golang/go/issues/19814#issuecomment-392178142
Comment From: kcross-ctoken
Yeah it does seem that enums support would solve my problem listed above without runtime package inspection.
Comment From: jimmyfrasche
@apparentlymart
> If it's opt-in then arguably that opt-in could be to add a `go:generate` directive comment that runs a stringer-like tool to generate the needed metadata at development time rather than inside the compiler. 🤔
Here's a sketch of how that might work:
Have a runtime package:

```go
package reflectconst // or a better name

func Lookup(reflect.Type) iter.Seq2[string, reflect.Value]

// Two-thirds of the API is private and should only be used by the go:generate tool.
type Pair struct {
	Name  string
	Value reflect.Value
}

func Register(reflect.Type, ...Pair)
```
Have a tool that finds types with a `//something:something` directive, uses `go/types` to grab the types and their values in the current package, and generates a file containing an `init()` that makes the appropriate `Register` call.
The major downsides compared with having it built in are that you don't get it for arbitrary types and that it would be doing a bunch of reflect calls during init. You could let it generate registrations for arbitrary types, but you'd still need to know which ones you need at development time. You could minimize the reflect calls with `Register[T any](...Pair[T])` and doing the calls to `reflect.Value` in a `sync.Once`, I suppose.
Comment From: Splizard
> I'm not sure why I need another package

You don't; you can literally copy the 100 lines into your existing package (or into an internal package if you want to share it).
@apparentlymart see above. Also, you are only exporting a type and a variable used to access the predefined values of the enum; users don't need to import any enum-specific functions. Can you elaborate on where the coupling/dependency is here?
Comment From: apparentlymart
I was commenting on package `Enum` only to acknowledge that it was an interesting alternative to what I was describing. I don't have any particularly strong feelings about it beyond what I mentioned.
I'm sorry if my passing comment read as particularly negative. I only meant that in general (not just in this specific case) I hesitate to have symbols for non-stdlib packages that I don't directly control in the public API of a library I'm writing, because I would not want to pressure someone else to keep publishing and maintaining a library they provided just to keep my own library viable; I prefer to keep my indirect dependencies only as an implementation detail so I can swap them out later if needed for any reason.
We probably shouldn't get too into all the different possible approaches to implementing enums here, because there are already several other issues about that topic. I was trying to keep my original comment mostly focused on the question of what we might be able to add to package `reflect` in particular to address the use-case that was presented, since this issue started as a reflection proposal. (The mention of `go:generate` was a preemptive concession that an opt-in codegen approach need not be in the compiler or standard library at all, not actually a proposal that we necessarily should push for that sort of design.)
Comment From: Splizard
@apparentlymart cheers, not read as negative. If this is a concern of yours, you can copy and paste 100 lines of code into your project. I think this is valuable to bring up: the use-case that was presented to support this issue can be addressed with 100 lines of code and/or code generation. I think there needs to be a stronger use-case for package reflection than just "reflectable enums", which can be done in multiple ways today.
Comment From: jimmyfrasche
I included the sketch of a `go:generate` tool to demonstrate the feasibility of that approach, but more importantly to examine the pros and cons versus it being built into `reflect`: a tool would store less data but need more run time to process it, and it would be limited to the types that you knew you wanted to use ahead of time.
Comment From: apparentlymart
Indeed, I think that's the main question I keep coming back to: is it actually a requirement that it be possible to enumerate the constants for any arbitrary type decided at runtime, or are the use-cases for this ones where the person defining the type knows that they intend to use it with something that requires those values at development time?
The original discussion was about a plugin system where I'd assume that someone implementing a plugin knows ahead of time which system they are intending to plug into and what that system requires, and so I'd expect explicit code generation to be sufficient for that scenario. But of course I realize that was an entirely separate use-case from enumerating constants associated with a type, and so I shouldn't try to blur across both sets of requirements.
@kcross-ctoken in https://github.com/golang/go/issues/61796#issuecomment-3056856262 you suggested that code generation would be viable and that having this built into the language is more of a preference than a hard requirement for solving that problem. Is that right? I don't mean to suggest that its being only a preference is necessarily grounds to reject incorporating it into the language, but "I can't solve my problem at all without this feature" carries a heavier weight in the cost/benefit tradeoff than "I have a solution that works, but I would prefer something else".
There are, unfortunately, lots of things that Java can do that Go can't as a result of significantly different implementation tradeoffs, and so Go already tends to favor code generation in various situations where Java would not. 🤔