I'm not sure if this has been discussed already (I couldn't find a prior issue), but currently reflect.Type has no direct methods to retrieve the type arguments of generic types.

The only way to retrieve these type arguments is by parsing the string returned by Type.Name(), which includes fully qualified package names and could look something like this in a complex case: MyGenericType[some/package.Foo, map[string]other/package.Bar]. This string is not easy to parse because of the possibility of nested type arguments, maps, slices, etc.

I propose adding methods to reflect.Type to retrieve these type arguments programmatically:

NumTypeArg() int
TypeArg(i int) Type

Comment From: ianlancetaylor

I'm not sure TypeParam is the right name here. The type parameters are the parameters that appear in the type definition. I think that what you are describing is what we usually call the type arguments.

Comment From: robpike

@ianlancetaylor Those terms are not the ones I know from my youth. We used to talk about formals and actuals, which carry a distinct meaning already. Modern computing's use of the terms parameters and arguments results in a domain-specific redefinition of true synonyms in normal English. However, I admit that redefinition is not unique to Go.

I wonder why this terminology shift happened.

Comment From: icholy

Adding methods to reflect.Type would be a breaking change.

Comment From: zephyrtronium

@icholy reflect.Type has unexported methods, so every type which implements it is in package reflect and can be updated along with the interface. Is there another sense in which it would be a breaking change to add methods to reflect.Type?

Comment From: ianlancetaylor

@robpike That's true, now that you mention that I remember "formal" and "actual" as well. But today even the Go spec speaks of "function parameters" and "function arguments".

Comment From: apparentlymart

FWIW I also learned "formal parameters" in school but have tended to use "parameter" and "argument" in my writing for at least the last decade or so because that seems (anecdotally) to be the current familiar jargon across various different language specs and tutorials.

It is unfortunate that in plain English "parameter" and "argument" are not clearly differentiated in the way that is intended in this context, but that seems to be a common characteristic of plain words adopted as jargon. The dictionary tells me that "formal" as a noun means "an evening gown" and that "actual" isn't a noun at all, so those words don't seem to be obviously better cognates. I think it's beneficial to go with the flow here and use terms that people are likely to have encountered in other languages and in tutorials. (even though this sort of jargon evolution does make me notice my age.)

With the obvious caveat that Wikipedia is a tertiary source rather than an authority, I do note that Parameter (computer programming) mentions both pairs of terms, but gives priority to "parameter" and "argument" while relegating "formal argument" and "actual argument" to secondary position. With that said, the section Parameters and Arguments does go on to acknowledge the inconsistency of usage and potential confusion between them.


Interestingly, one of the first tutorials I found when looking for examples -- the "Functions" section of A Tour Of Go -- seems at first read to be using these terms without defining them and switching somewhat carelessly from one to the other without explaining their relationship:

A function can take zero or more arguments.

In this example, add takes two parameters of type int.

As a reader who already knows the common meanings of "arguments" vs. "parameters" I probably wouldn't have noticed this if I hadn't been explicitly looking for examples. Perhaps what we can learn from this example is that the terms "argument" and "parameter" are so familiar to programming language learners that explicit definition and distinguishing remarks felt unnecessary here. (I'm assuming that a non-trivial number of people have successfully learned Go in part by following this tour.)

Comment From: atdiar

Seems about the same difference between variables and values to me. Parameter is to variable what argument is to value.

Comment From: aaronc

Happy to switch this proposal to TypeArgument, TypeArg or even TypeActual. What do people prefer?

Does the proposal otherwise sound reasonable?

Comment From: icholy

It should probably be

NumTypeArg() int
TypeArg(i int) Type

Comment From: aaronc

I've changed this proposal to use TypeArg for now. That seems reasonable. Happy to change to a different naming if people prefer otherwise.

Comment From: icholy

@aaronc NumTypeArgs should be NumTypeArg to match the rest of the reflect.Type methods.

Comment From: aaronc

@aaronc NumTypeArgs should be NumTypeArg to match the rest of the reflect.Type methods.

Updated

Comment From: mdempsky

Note that if you have a local defined type declared within a type-parameterized function, then that function's type parameters are also implicit type parameters of the defined type.

For example:

func F[X any]() any {
    type T int
    return T(0)
}

then F[X].T is implicitly parameterized by X.

Comment From: jonbodner

Is there any chance this issue will be revisited?

Comment From: ianlancetaylor

@jonbodner This proposal remains on the incoming queue.

Comment From: adonovan

BTW, the "formal" vs "actual" parameter terminology dates (as so many things do) from ALGOL, and is still widely used in academic work, but a wide variety of language communities seem to have standardized on the "parameter"/"argument" terminology.

Comment From: apparentlymart

We spent a bunch of time discussing the surface API shape, and it seems to have landed in a good place where there's consensus. However, we haven't yet discussed implementation tradeoffs.

I'm far from an expert on this topic and so hopefully someone will correct me if I'm wrong on this 🤞 but I believe that currently only the compiler actually tracks the individual arguments for an instantiated generic type, while the final compiled program only has the string that's returned by Type.Name(), like the MyGenericType[some/package.Foo, map[string]other/package.Bar] string described in the original proposal.

That would mean that we'd also need to change the type information data structures to somehow track the type arguments in a way that allows recovering suitable Type objects for them.

Perhaps that means expanding abi.Type to have an additional field tracking an optional offset for some data retained elsewhere in the type information block, which would consist of a length followed by the specified number of TypeOff values. This optional offset could then be zero for a type that is not based on a generic type, so the overhead in that case is just one additional int32 value to store for each type.

Perhaps we could get extra fancy and add a new flag to TFlag whose presence redefines the Str NameOff field to be the offset of a variable-length structure containing both the type name and the array of TypeOff values instead of just the name, and then that flag could be set only for types that are based on generic types and the size of non-generic types would be unchanged. (TFlag seems to have three bits left unused, so there is room for this but of course these precious few bits might be better reserved for something else.)

Could the toolchain test whether there's any potential call to Type.NumTypeArg or Type.TypeArg anywhere in the program and skip including any new information at all if not? In that case, I imagine that a program which definitely uses neither of those functions would never set the new bit in TFlag, and so there'd be no size increase at all.

My intent in all of the above is to try to consider the cost of accepting this proposal for programs that would not actually use it, since I assume that only a small number of total Go programs need to inspect generic type parameters at runtime:

  • Growing the type information size for all types would be very unfortunate, and probably unacceptable?
  • Growing the type information size only for types based on generic types is better.
  • Growing the type information size only for types based on generic types and only when the program actually contains a call to Type.NumTypeArg/Type.TypeArg seems ideal, assuming my assumption is correct that most programs would not include any such calls.

Comment From: apparentlymart

My previous comment was trying to understand the cost of accepting this proposal.

I also note that this issue doesn't include much discussion of the benefit of accepting the proposal. What kind of program needs to be able to inspect the type information for generic type arguments at runtime?

I have previously written code where it was useful to dynamically detect a type argument for one specific generic type, but that's already possible to solve in at least two different ways today, depending on how much information you have at compile time:

type Example[T any] struct {
    Value T
}

func exampleArgType[T any](v Example[T]) reflect.Type {
    return reflect.TypeFor[T]()
}

type Example[T any] interface {
    DoThing(T)
}

// exampleArgType takes a value whose dynamic type implements
// some instantiation of Example[T] and returns which T it implements
// it with. Results are unspecified if the given value does not implement
// any instantiation of Example[T].
func exampleArgType(v any) reflect.Type {
    outer := reflect.TypeOf(v)
    method, ok := outer.MethodByName("DoThing")
    if !ok {
        return nil
    }
    // For a concrete type, In(0) is the receiver, so the
    // method's own first parameter is In(1).
    return method.Type.In(1)
}

It seems like this proposal would only be needed for situations where the program using this API knows nothing at all about the types it's going to be working with at runtime. Can anyone share concrete examples of such programs that would be useful to be able to write?

Comment From: Manbeardo

The only scenario that comes to mind where the user wouldn't have the type arguments readily available in the current scope would be when there's indirection between the types enumerated in the scope and the types being reflected upon. For example:

type Repeater[T any, U any] interface {
    Repeat(T, int) []U
}

type SelfRepeater[T any] = Repeater[T, T]

func doSomething[T any](t T, r SelfRepeater[T]) {
    // a reader can logically deduce that r is a Repeater[T, T], but
    // you can't directly reference the U arg from Repeater here
}

That doesn't seem especially useful to me. IMO, the bigger feature gap with generics and reflection is the lack of any way to interact with type constraints, but I also don't see a way to do anything useful after reflecting upon type constraints. Reflection could tell you that T satisfies cmp.Ordered, but you still wouldn't be able to call cmp.Compare(ta, tb) without additional language features like support for type switches on generically-typed values (related: https://github.com/golang/go/issues/45380).

Comment From: aclements

This proposal has been added to the active column of the proposals project and will now be reviewed at the weekly proposal review meetings. — aclements for the proposal review group

Comment From: aclements

What's the motivation for this addition? The top post mentions that it's annoying to parse this information out of the type name, which is true, but it would help a lot to know why people want to get the type parameters.

There's a hazard that this may prevent optimization opportunities in the future, though @adonovan pointed out today that we already need enough information to determine type equality at runtime, which means differently instantiated types necessarily need distinct runtime representations.

There are some subtleties that haven't been addressed. For example, would these methods be able to reflect on type parameters to function types? The answer probably has to depend on whether it's a named function type because Go attaches type parameters to named types (function or otherwise) differently than it attaches type parameters to functions. For example, consider the following definitions (play link):

type S[T any] struct{}

func F[T any]() {}

type FT[T any] func()

S[int] and S[string] have different types because they're named, even though the underlying type is struct{} in both. This indicates there's necessarily something for reflect to recover the type parameters from.

F[int] and F[string] have identical types (func()), suggesting there's no way reflection could recover their type parameters. However, FT[int] and FT[string] have different types, again because they're named.

An implementation of this proposal would help clarify things around these semantic details, optimization issues, and binary size implications.

Comment From: jonathansharman

@aclements

What's the motivation for this addition? The top post mentions that it's annoying to parse this information out of the type name, which is true, but it would help a lot to know why people want to get the type parameters.

I can provide at least one example. At my organization, nicheinc, we use kin-openapi/openapi3gen to generate OpenAPI schemas from Go code. That package includes the ability to customize generated schemas using a SchemaCustomizerFn, which takes (among other info) a reflect.Type that can be used to identify Go types that should receive a non-default schema.

We also use a custom Set[T] type, which has an underlying type of map[T]struct{}. The Set[T] type is marshaled to/from a JSON array (via custom MarshalJSON and UnmarshalJSON methods). Therefore, when generating an OpenAPI schema, we would like any Set[T] to be treated as a JSON array, as opposed to a JSON object, which is openapi3gen's default behavior for a map type.

The natural way to do this would be to add a schema customizer that detects values of type Set[T], for any type T. However, that's currently not possible without resorting to parsing the type name and attempting to infer the reflect.Type of the type parameter, which is difficult and fragile. Instead, we're handwriting schemas for each concrete Set type that exists in our REST APIs, which doesn't scale at all and requires a lot of care. (This kind of repetition is exactly what generic types are designed to avoid!)

I'm hoping at some point in the future we'll be able to extract from a reflect.Type:

  1. Whether it's an instantiation of Set[T]
  2. The reflect.Type for T

This proposal would provide (2), but as written I don't think it would actually provide (1). For our use case, we would probably still have to do some reflect.Type.Name() parsing. Perhaps reflect should also provide a way to reflect (uninstantiated) generic types, which could then be used to detect whether a reflect.Type is an instantiation of such a type. That sounds like a bigger addition though and is possibly better suited for a separate proposal. Regardless, I think this proposal is a step in the right direction!

Comment From: apparentlymart

If you directly control the Set[T] definition and it lives in the same package as the code that wants to inspect it, I think that's an example of something that can be solved using one of the techniques I described in my earlier comment.

type Set[T any] struct {
    // ...
}

func (s Set[T]) setElementType() reflect.Type {
    var elem T
    return reflect.TypeOf(elem)
}

func getSetElementType(v any) reflect.Type {
    type AsSet interface {
        setElementType() reflect.Type
    }
    s, ok := v.(AsSet)
    if !ok {
        return nil
    }
    return s.setElementType()
}

I am not at a proper computer right now so I'm afraid I just typed the above directly into this comment box without testing it, and so I probably didn't get it quite right but the general idea here is:

  • Using an unexported method name means that only types in the same package can implement this interface.
  • The signature of the method does not vary by T -- only the receiver does -- so it can be implemented by all concrete instantiations of the generic type.
  • The vtable of the interface value effectively serves as a proxy for the information about what type was used for T, but is generated by the compiler only for this type and so doesn't increase the size of the metadata generated for any other type in the program.

Of course I don't mean to say that this approach is without disadvantages. In particular, it involves a bunch of weird boilerplate specialized for each type that wants to do it. But I mention it both because it's something already possible in Go today (so it might be a useful pattern to follow in the meantime) and because I wonder if we could solve the use-cases of this proposal by making this sort of pattern less annoying to use (e.g. using codegen tools), rather than by changing the compiler and reflection metadata. 🤔

Comment From: DeedleFake

@apparentlymart

You can simplify setElementType() a bit using reflect.TypeFor():

func (s Set[T]) setElementType() reflect.Type {
  return reflect.TypeFor[T]()
}

Comment From: jonathansharman

If you directly control the Set[T] definition and it lives in the same package as the code that wants to inspect it ...

We did consider something similar to this approach, but our Set type lives in its own separate module from the service code, and adding exported code only relevant to OpenAPI generation felt too invasive. It's also easy to imagine not controlling the Set library, in which case the only way to use that approach would be a wrapper type, which would affect the public APIs of our service modules.

I wonder if we could solve the use-cases of this proposal by making this sort of pattern less annoying to use (e.g. using codegen tools), rather than by changing the compiler and reflection metadata. 🤔

It's definitely good to have other options. That said, type parameters are a first-class feature of Go, so the lack of support for them in reflect feels like a conspicuous hole to me (and apparently quite a few others, based on the reactions to this proposal). IMO inspecting type parameter info at run time should ideally not require what amounts to manual implementation of reflection (e.g. extra code generation steps or manually implemented methods like setElementType), even if compiler/standard library support comes with some overhead. (Reflection is an expensive operation that should be avoided in hot loops anyway.)

Comment From: apparentlymart

The fact that the technique I described requires that the author of the Set type anticipate that some users of that type might need to reflect over the element type is definitely a notable constraint and disadvantage, I agree.

Given this latest discussion it seems like one key motivation for this proposal is to allow a reflection-based package, such as a marshaling/unmarshaling library, to do something special for all instantiations of a specific generic type (and no other types) without making any modifications to the package containing that generic type.


I note that what we're discussing here is the start of exposing more "kind metaprogramming" through the reflection system. I don't mean to say that's necessarily a problem, but I mention it because exposing this information might in turn call for exposing other information, such as:

  • Given two reflect.Type values, are they both instantiations of the same generic type? e.g. comparing Set[string] and Set[int] would return true, but comparing Set[string] and SomethingElse[string] would not.

    Or, to ask this a different way, "what is the kind of this reflect.Type?".

    This is also something that can today be achieved by parsing what reflect.Type.Name returns, but this proposal already asserts that sort of thing as unsatisfactory.

  • Programmatically constructing a reflect.Type given a "kind" (some representation of a generic type) and a set of type parameters.

Because the language already had a fixed set of built-in generic types before it became possible to define new ones (map types, slice types, channel types, etc.) there is already some special-cased support for "kinds" in the form of reflect.Kind, various kind-specific methods on reflect.Type, and kind-specific type constructors like reflect.MapOf. Through these features it's already possible to handle what this proposal wants and the two extra requirements I identified above for the built-in type kinds, but not for user-defined type kinds.

This makes me wonder if this proposal is going to end up acting as a first step toward introducing more general support for "kinds" in package reflect. And if so, do we need to think ahead a little about what that future API might look like so that this proposal is less likely to create a dead-end that can't grow to include the other requirements later?

I really, honestly don't want to grow the scope of this proposal to be "full support for user-defined kinds in package reflect", but I can't help but think that if this proposal were accepted then the other two I described above would not be far behind given that the main motivation we've identified is for reflection-based packages to do one of the three things that reflect already allows for the built-in kinds, and existing reflection-based packages already tend to do those other two things to the built-in kinds as well. 😖

(I will pre-concede that what the reflect package currently calls "kinds" doesn't map neatly onto what I'm calling "kind" here even if we did imagine it supporting user-defined generic types. For example, all instantiations of type Foo[E any] []E would appear in reflect today as having reflect.Slice as their Kind. Maybe what we have here is the "kind-level" equivalent of "named types" vs. "underlying types", where the Foo I just mentioned is a "named kind" called Foo whose "underlying kind" is slice. I don't like the conceptual messiness of that, but if that did make sense as a distinction then one path to evolving package reflect would be for the existing reflect.Kind to represent only "underlying kinds" and then introduce a new type that can represent both named kinds and underlying kinds, similar to how today's reflect.Type can represent both named types and underlying types. 🤔)

Comment From: apparentlymart

(This is a continuation of my previous comment about reflect having broader support for "user-defined kinds".)

I spent a while reading through the language spec to remind myself what words it uses to talk about the different concepts involved here. It seems like the current term for what I was calling "user-defined kinds" is "generic types", which is handy because that's a term that does not conflict in any way with reflect.Kind. I also misused the term "named types" when I should've said "defined types".

Therefore, a hypothetical expanded API proposal:

package reflect

type Type interface {
    // (everything that was there before, and...)

    // GenericType returns the generic type that this type is an instantiation of.
    //
    // If the receiver is not an instantiation of a generic type, returns nil.
    // (TODO: It could potentially return a degenerate GenericType where
    // NumTypeParam returns zero, which might be easier to work with?)
    //
    // The result of this method for two types compares equal if both types
    // are instantiations of the same generic type.
    GenericType() GenericType

    NumTypeArg() int
    TypeArg(i int) Type
}

// GenericType represents an uninstantiated generic type.
type GenericType interface {
    // Name returns the generic type's name within its package for a defined generic type.
    // For other (non-defined) generic types it returns the empty string.
    Name() string

    // PkgPath returns a defined type's package path, that is, the import path
    // that uniquely identifies the package, such as "encoding/base64".
    // If the type was predeclared (string, error) or not defined (*T, struct{},
    // []int, or A where A is an alias for a non-defined type), the package path
    // will be the empty string.
    PkgPath() string

    // Kind returns the specific kind of this generic type.
    Kind() Kind

    // NumTypeParam returns the number of type parameters of this generic
    // type. TypeParam returns the constraint of the parameter at the given
    // index, as a type whose kind is always Interface.
    NumTypeParam() int
    TypeParam(i int) Type
    // TODO: Should we expose the names of the parameters too?

    // (probably some other methods would be useful too, but
    // I'm going to keep this relatively small for now.)
}

// GenericTypeFor is a shorthand for TypeFor[T]().GenericType().
func GenericTypeFor[T any]() GenericType

// OfGenericType returns an instantiation of the given GenericType with
// the given argument types.
//
// Panics if the number of arguments does not match the generic type's
// parameter count.
func OfGenericType(g GenericType, args ...Type) Type

This API supports all three of the things I identified in my previous comment:

  • You can get the type arguments of an instantiation of a generic type just as this proposal originally requested, using Type.NumTypeArg and Type.TypeArg.
  • You can test whether two types are instantiations of the same generic type by calling Type.GenericType on each and comparing the results.
  • You can dynamically instantiate a generic type with arguments decided at runtime by calling OfGenericType.

    (I think that this one in particular is glossing over a lot of complexity, since currently the set of instantiations of all generic types is fixed at compile time. For example, calling reflect.Zero(reflect.OfGenericType(...)) would need to materialize something that methods could be called on, which I expect would force a "type erasure" strategy for generics, and the generics proposal intentionally avoided anything that would force such a strategy. So I think a real proposal for this should probably leave out this part; I included it only to show how it might fit into package reflect's API if we wanted to include it, but in practice it's probably infeasible unless it came with the limitation that it can only be used with instantiations that were already present in the compiled program somehow.)

With this API I'm imagining that it would be valid to, for example, call GenericType on a result from reflect.SliceOf and get a GenericType representing the built-in slice kind, and so this API would work for both defined generic types and the built-in generic type kinds. However, that idea might not actually have legs because it raises annoying questions about whether e.g. function types are "generic types", pointer types are "generic types", etc. The collection type kinds seem relatively intuitive, but others not so much. I'd ponder on this more in a real proposal. 🤷‍♂️


If this seems like a plausible direction I should probably take it to a separate proposal issue rather than derailing this one, so I'm sharing this here mainly to see if it seems worth the effort to write something like that.

Maybe we can avoid making more notification noise for folks already subscribed to this issue by using a 🚀 reaction if you think it would be worth exploring this more as a separate proposal, or 😕 if you think that would not be worth the time. (I'm intentionally not specifying 👍 and 👎 here because I want to keep "is it worth writing a proposal?" separate from "is this a feasible solution to the problem(s)?"; not all feasible ideas are worthy proposals.)

Comment From: Manbeardo

@apparentlymart I'm in the "figure out what full support for user-defined kinds looks like first" camp because I struggle to see how this proposal alone could actually unblock the use cases that folks are mentioning. I've attempted creating example code for several different use cases, but they all wound up needing to resort to unsafe type assertions/conversions and liberal use of any even if I could get the reflect.Type for the type parameter(s).

IMO, a truly satisfactory result that allows users to safely build reflection-based tools for generic types might need a reflect/v2 that's designed from the ground up with generics in mind.

Comment From: Merovius

@apparentlymart

Programmatically constructing a reflect.Type given a "kind" (some representation of a generic type) and a set of type parameters.

That is not going to happen. There are reasons why you can not use an uninstantiated generic type/function in Go. Being able to do it in reflect would be strictly harder.

It would be feasible to provide a way to check if two types are instantiations of the same generic type (I think the API you outline mostly works, except for OfGenericType). But in general, the reflect package is a dynamic form of Go's type system. And Go's type system has no way to do this. So even that seems out of scope for reflect. And yes, this proposal itself would be out of scope for the same reason.

So really, I think that's the main argument here: the use cases mentioned aren't actually something you can do statically, so they don't really fit into reflect either.


In #69559 I handwaved at this feature for a very similar use case to the one @jonathansharman brought up and after writing that, I came to the same conclusion they did: that really, this feature alone would be useless. It is actually a mere convenience: if you know that a reflect.Type is an instantiation of some generic type, you can look up the type arguments by extracting them from a method or field they appear in as a parameter (unless the type parameter is a shadow type). So all this proposal does is make that step more convenient.

I think maybe a better way to address both of these use cases, more in line with the philosophy of reflect, would be to fix one of the long-standing holes in reflect: the inability to construct interfaces dynamically. It doesn't help you to check for the actual concrete type. But it lets you construct an interface type that it implements and gives you the methods you need. And then "type-assert" on it.

Though come to think of it, even that is probably not super helpful - you can theoretically already do the equivalent by just calling the methods by name. I was hoping it might give a way to effectively do that while avoiding MethodByName and the unpleasant consequences it has on link time optimization. But given that the two might be similar in power, InterfaceOf would probably come with the same pitfalls. ๐Ÿค”

Comment From: apparentlymart

I agree that OfGenericType cannot work in today's Go, and likely would never work in any future Go, and acknowledged that in the previous comment already. I included it because the goal of this subthread was to explore what hypothetical future additions to reflect might look like, mainly to try to avoid this particular idea painting reflect into a corner from an API design perspective.

For me the other parts seem defensible in that I think they are technically feasible to implement and analogous to operations reflect already offers for the built-in type kinds: reflect.GenericType is analogous to reflect.Kind, Type.GenericType is analogous to Type.Kind, and NumTypeArg/TypeArg are analogous to Type.Elem, Type.Key and the various other kind-specific methods of reflect.Type. It's still reasonable to decline to support them even if they are technically possible though, because they seem to imply some non-trivial costs.

I realize I wasn't very clear in what I was intending to achieve in my previous comment, so I'll state the two goals explicitly:

  • Primarily, I was trying to sketch what hypothetical future support for these features might look like to gauge whether what this proposal was proposing would fit in to that bigger picture. (and, as far as I'm concerned, the answer was "yes".)

    I am not yet proposing anything from what I wrote other than the part that exactly matches the original proposal. The emoji voting was intended to gauge interest in a broader proposal like that without necessarily growing the scope of this one to include those other features, since they seem backward-compatible.

  • Secondarily, I hoped to more concretely illustrate what I meant about this proposal seeming incomplete for the use-cases that we've discussed so far. In particular, if you want to recognize instantiations of a hypothetical Set[T] generic type and do dynamic behavior based on T (without modifying the package defining Set[T]) then the current proposal only solves the second half of that problem.

    Much as reflection-based marshaling libraries tend to need to ask questions of the form "is this a map, and if so what is its element type?" (in encoding/json, for example), handling generic collection types in a similar way also seems to require both parts, and so we might be struggling to find good use-cases for this proposal specifically because this proposal is incomplete. 🤷‍♂️

For the purpose of this issue, that API sketch is only for thinking about whether what this issue proposes is complete enough and, if not, whether a future proposal could make it complete enough in a backward-compatible way. I don't want to turn this issue into a discussion about that broader proposal except insomuch as it helps to decide whether the smaller API proposed in this issue is worth pursuing.

Comment From: Merovius

> Much as reflection-based marshaling libraries tend to need to ask questions of the form "is this a map, and if so what is its element type?" (in encoding/json, for example), handling generic collection types in a similar way also seems to require both parts

I really don't think so, for marshalling specifically. It seems to me that if encoding/json wants to handle marshalling container types specifically, it should do so not by asserting on a specific concrete type, but by checking that the type has an All() method of an appropriate type. That way, it can work with arbitrary user-defined collections as well. Here is a quick and dirty example of how that can work right now (for unmarshalling, we would obviously need methods to modify the container instead).

It would be more convenient if that code could just assert on some generic interface or something. But it works without adding extra reflect API.

There might be other use cases, of course. But I think even for @jonathansharman's use case, I would recommend an approach like this.

Comment From: apparentlymart

That's an interesting alternative!

I guess I'm imagining that as similar in concept to encoding.TextMarshaler and encoding.TextUnmarshaler, which any "string-like" type can implement and then in principle be supported for any marshalling library that can work with strings.

We could decide to add some additional interfaces to package encoding to represent broad concepts like "list marshaler" and "map marshaler" (and the corresponding unmarshalers) which could be implemented by types whose most natural serialisation is a list or map, and then any serialisation format that has a list-like or map-like concept could use it.

That seems like an interesting compromise, as long as these interfaces include some way to reliably get the generic element types of an arbitrary instantiation of one of those interfaces, such as by directly exposing reflect.Type/reflect.Value in the API, or using any, instead of trying to make the interfaces compile-time generic.

Comment From: Merovius

I don't understand what you mean. What would those interfaces be useful for?

How a list is encoded is highly format specific, so it doesn't make sense to have a general interface for that. How the keys/values are encoded can already be specified by implementing the corresponding interfaces on the key/value type. And the All() method is already an established convention for containers, so how to get at the elements is also already standardized. And if a container type wants to do something more esoteric, it can always implement json.Marshaler (or the corresponding format-specific interface) itself, to overwrite the default behavior.

One thing I will say is that this approach requires using MethodByName. I would be interested if there is a way to do this which doesn't require that and doesn't have the same downsides (preventing the linker from stripping unused methods).

(also, this discussion is of course technically a side-track for this proposal. Except that it's the main use case we've come up with so far, so if there already is an alternative, this proposal isn't needed. But maybe we should explicitly file a proposal for encoding/json to marshal containers/iterators)

Comment From: apparentlymart

I guess there is a philosophical question here about whether "I can be iterated as a sequence" implies "it makes sense to marshal me as a list-like construct". I think you are assuming it does, and I was making the opposite assumption, so implementing the interfaces I described is how an author would express the latter.

However, I don't have a strong opinion about it. Either way we still have the question of how to ask whether a particular type implements the interface for any type arguments, and to determine what those type arguments are, which is not clear to me yet.

Comment From: Merovius

> Either way we still have the question of how to ask whether a particular type implements the interface for any type arguments, and to determine what those type arguments are, which is not clear to me yet.

See the playground example. [edit] Realized that reflect already has support for iter.Seq{,2}, which simplifies things. [/edit]

Comment From: apparentlymart

Ahh, okay! This is in a similar direction as the second example from my earlier comment. For some reason I thought you were proposing something a little different than that, but in retrospect I see that was entirely my misreading. Sorry for the noise.

So, putting the specific API details aside for a moment, it seems like we could boil this alternative proposal down to: serialization libraries should be checking for certain method names -- effectively emulating a check for a "generic interface" despite reflect not supporting that first-class -- instead of trying to detect all instantiations of a specific concrete type.

Whether that certain method is something generic like All() iter.Seq[K] or something that explicitly communicates "I'm best serialized as a list" is a good second-level question on top of that, but to start I'm mainly curious about whether we have consensus that this method-detection-based approach (regardless of specific method name/signature) seems viable for the use cases already presented.

Comment From: jonathansharman

My two cents is that checking for method names is an acceptable workaround but not ideal. Returning to the example of openapi3gen schema customization, there's another type whose schema we customize: nup.Update. That's a custom type of ours we use during JSON (un)marshalling to distinguish fields that are missing, null, or non-null (since by default missing and null are unmarshalled into Go the same way). We would like fields of this type to receive an OpenAPI schema equivalent to the element type.

Currently, very much like the suggestion to check for All() iter.Seq[K], we're using the existence of a ValueOrNil() *T method to detect this type. But that technique feels roundabout, and it runs the (small) risk that we catch a different type with the same kind of method by mistake.

In the general case, what if someone wants to customize behavior for a generic type that doesn't have a sensible method to check for? Hopefully they could use FieldByName, but again that's pretty oblique. I think it's a reasonable expectation that reflect should enable customization for particular generic types, in the same way that it already does for both non-generic and built-in generic types. Checking for implementation of an interface is just not the same operation as checking for a type or set of types, even if substituting the former for the latter often works well enough.

Comment From: Merovius

> I think it's a reasonable expectation that reflect should enable customization for particular generic types, in the same way that it already does for both non-generic and built-in generic types.

I agree that the builtins are a strange beast and will likely continue to be one indefinitely.

But I don't believe there is anything you can do with non-generic types that you can't with generic types, today. You can compare values of generic types, you can iterate over fields and methods, call methods, read and write to fields… If you don't know anything about a type, the powers that reflect gives you are completely independent of whether that type has type arguments or not. Think about it: if you don't know the type of a value statically (i.e. if you don't import the package it is defined in), what can you do to it that you can't do to something like *hash.Map[K, V]?

The way we specialize libraries for a given type is by attaching methods or fields to it. If you want your type to work with encoding/json, you have two options: you add struct tags, or you implement json.{Unm,M}arshaler. Both of these you can do with generic type definitions. I think it's reasonable to ask for use cases why that is not enough.

I'll note that not having to teach reflect about generics was an intentional feature of the design. Deviating from that is a significant step. Especially since the outcome will always feel limited - in particular, you will never be able to go from "generic type" to "instantiation".

Comment From: aclements

Based on the discussion above, this proposal seems like a likely decline. โ€” aclements for the proposal review group