
I need to keep structs of multiple types in a slice and seed them. I take them as a variadic parameter of an interface type and loop over them. If I call a method of the interface it works, but when I try to reach the underlying struct I can't. How can I solve this?

Note: The Seed() method returns the file name of the seed data.

The Interface:

type Seeder interface {
    Seed() string
}

Method:

func (AirportCodes) Seed() string {
    return "airport_codes.json"
}

SeederSlice:

seederModelList = []globals.Seeder{
    m.AirportCodes{},
    m.Term{},
}

And the last one, SeedSchema function:

func (db *Database) SeedSchema(models ...globals.Seeder) error {
    var (
        subjects []globals.Seeder
        fileByte []byte
        err      error
        // tempMember map[string]interface{}
    )
    if len(models) == 0 {
        subjects = seederModelList
    } else {
        subjects = models
    }
    for _, model := range subjects {
        fileName := model.Seed()
        fmt.Printf("%+v\n", model)
        if fileByte, err = os.ReadFile("db/seeds/" + fileName); err != nil {
            fmt.Println("asd", err)
            // return err
        }
        if err = json.Unmarshal(fileByte, &model); err != nil {
            fmt.Println("dsa", err)
            // return err
        }
        modelType := reflect.TypeOf(model).Elem()
        modelPtr2 := reflect.New(modelType)
        fmt.Printf("%s\n", modelPtr2) 
    }
    return nil
}

I can reach the exact model, but I can't create an instance of it and seed it.

  • You can access the concrete struct and its fields by using a type assertion or type switch. Or, as an alternative, reflection can also help. – mkopriva Sep 20 '22 at 08:21
  • @mkopriva I will seed about 30 models. I don't think a type switch makes sense for that. If you know the reflection approach, can you share it? – icsarisakal Sep 20 '22 at 08:29
  • 2
    This looks like a sub-optimal design. Which field(s) of the structs are you trying to access? – Jonathan Hall Sep 20 '22 at 08:47
  • @Flimzy The structs come in dynamically. I fill all fields of the current struct and create a row with gorm – icsarisakal Sep 20 '22 at 08:53
  • 1
    Definitely sounds like an awkward design. But without more information, I'm not sure what to suggest as an alternative. – Jonathan Hall Sep 20 '22 at 09:04
  • @Flimzy Actually it's not, because if I don't do it like that, I have to write the same code for every model that will be seeded. OK, maybe not exactly the same, but similar code – icsarisakal Sep 20 '22 at 09:21
  • "Actually its not" -- What's not? – Jonathan Hall Sep 20 '22 at 09:27
  • `json.Unmarshal(fileByte, &model)` here `model` is an **interface type** called `globals.Seeder`, so then `&model` is `*globals.Seeder`, i.e. a pointer-to-interface. Unmarshaling into pointers-to-interfaces is usually the wrong approach. And given that the snippet above passes non-pointer values into the `globals.Seeder` slice, here it's not *usually* wrong, it's always wrong. – mkopriva Sep 20 '22 at 09:34
  • If what you want to do is to unmarshal different JSON files into different concrete structs, what you should do is the following: create the slice using pointers, e.g. `[]globals.Seeder{&m.AirportCodes{}, ...}`, and then in the loop pass just `model` instead of `&model` to `json.Unmarshal`; that's all you need to do (see the sketch after these comments). – mkopriva Sep 20 '22 at 09:37
  • 1
    This is an X-Y problem. You're asking how to do Y, whereas the real question is: how did you end up in situation X? You say you need to handle ~30 types, all broadly similar, but they all are gorm data models. Why exactly do you *think* you need a type switch? I'm 99% certain you don't, provided you rethink your approach a bit – Elias Van Ootegem Sep 20 '22 at 11:25
  • Guys (@EliasVanOotegem, @mkopriva, @Flimzy) I'm just trying to design a correct structure. I want to design it like Laravel's Eloquent ORM seeder system. HOW CAN I DO THIS DESIGN IN GOLANG? – icsarisakal Sep 20 '22 at 11:33
  • 1
    @icsarisakal everyone in these comments is trying to help you; shouting about it isn't necessary. You're not going to be able to make something that works like Laravel because Go and PHP are *very* different languages. As others have said, it's likely the underlying approach is fundamentally unsuitable, likely due to being inspired by a very different language; there is almost certainly a more Go-appropriate way to do this if you can provide more information. – Adrian Sep 20 '22 at 13:58
  • @Adrian I apologize for my shouting. I can't find any solution to this problem and got angry. What is the best way to do seeding? – icsarisakal Sep 20 '22 at 15:11
  • 1
    @icsarisakal: As others have pointed out, part of your premise is flawed in that you're trying to get golang to behave like a _PHP framework_. That said, a quick google search came up with [this](https://github.com/randree/gormseeder). A gorm db seeder tool. You can either use that, or take inspiration from that particular implementation. – Elias Van Ootegem Sep 22 '22 at 09:44
  • @EliasVanOotegem Thanks for your suggestion. I have seen this package before, but I didn't want to depend on a package for seeding. I didn't analyze their code, but I'm going to look right now. You guys insist that Go and PHP are not the same; I know that, but the PHP logic seems more efficient to me. Maybe after I look at this package my mind will change. Thanks a lot. – icsarisakal Sep 22 '22 at 12:46
  • @icsarisakal I spent years writing PHP for a living, and was doing that as my main job when I first started using golang (version 1.4, so ~8 years ago). I get that things seem more efficient (or at least more sensible) in the language you're most familiar with. However, as someone who not only wrote PHP, but developed a couple of PHP extensions (in plain old C), I can confidently say that this is an understandable misconception due to a lack of experience with go vs an abundance of experience with PHP. – Elias Van Ootegem Sep 22 '22 at 14:53
  • And to be clear: I'm not saying this to be condescending, patronising, demeaning, or rude in any way shape or form. I've been there, too. I've seen others transition from languages more akin to PHP (like Perl, Python, or even JavaScript) go through the same process. It's normal to try and map the language you know onto the new one you're picking up, but it's important to keep in mind that a 1-to-1 translation is always sub-optimal – Elias Van Ootegem Sep 22 '22 at 14:55
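For reference, a minimal sketch of the fix mkopriva describes above, assuming the same globals.Seeder, m.AirportCodes, and m.Term types from the question: the slice holds pointers, and the interface value itself is passed to json.Unmarshal.

// Build the seeder list with pointers so json.Unmarshal can
// populate the underlying structs directly.
seederModelList = []globals.Seeder{
    &m.AirportCodes{},
    &m.Term{},
}

// ...and inside SeedSchema's loop, pass the interface value itself;
// it already wraps a *AirportCodes / *Term:
if err = json.Unmarshal(fileByte, model); err != nil {
    return err
}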

1 Answer


After some back and forth in the comments, I'll just post this minimal answer here. It's by no means a definitive "this is what you do" type answer, but I hope this can at least provide you with enough information to get you started. To get to this point, I've made a couple of assumptions based on the snippets of code you've provided, and I'm assuming you want to seed the DB through a command of sorts (e.g. your_bin seed). That means the following assumptions have been made:

  1. The Schemas and corresponding models/types are present (like AirportCodes and the like)
  2. Each type has its own seed file (the name comes from the Seed() method, which returns a .json file name)
  3. Seed data is, therefore, assumed to be in a format like [{"seed": "data"}, {"more": "data"}].
  4. The seed files can be appended, and should the schema change, the data in the seed files could be changed all together. This is of less importance ATM, but still, it's an assumption that should be noted.

OK, so let's start by moving all of the JSON files in a predictable location. In a sizeable, real world application you'd use something like XDG base path, but for the sake of brevity, let's assume you're running this in a scratch container from / and all relevant assets have been copied in to said container.

It'd make sense to have all seed files in the base path under a seed_data directory. Each file contains the seed data for a specific table, and therefore all the data within a file maps neatly onto a single model. Let's ignore relational data for the time being. We'll just assume that, for now, the data in these files is at least internally consistent, and any X-to-X relational data will have the right ID fields allowing for JOINs and the like.


Let's start

So we have our models, and the data in JSON files. Now we can just create a slice of said models, making sure that data you want/need to be present before other data is inserted sits at a higher entry (lower index) than the data that depends on it. Kind of like this:

seederModelList = []globals.Seeder{
    m.AirportCodes{}, // seeds before Term
    m.Term{},         // seeds after AirportCodes
}

But instead of returning the file name from this Seed method, why not pass in the connection and have the model handle its own data, like this:

func (_ AirportCodes) Seed(db *gorm.DB) error {
    // we know what file this model uses
    data, err := os.ReadFile("seed_data/airport_codes.json")
    if err != nil {
        return err
    }
    // we have the data, we can unmarshal it as AirportCode instances
    codes := []*AirportCodes{}
    if err := json.Unmarshal(data, &codes); err != nil {
        return err
    }
    // now INSERT, UPDATE, or UPSERT (Create returns *gorm.DB, so return its Error field):
    return db.Clauses(clause.OnConflict{
        UpdateAll: true,
    }).Create(&codes).Error
}

Do the same for other models, like Terms:

func (_ Terms) Seed(db *gorm.DB) error {
    // we know what file this model uses
    data, err := os.ReadFile("seed_data/terms.json")
    if err != nil {
        return err
    }
    // we have the data, we can unmarshal it as Terms instances
    terms := []*Terms{}
    if err := json.Unmarshal(data, &terms); err != nil {
        return err
    }
    // now INSERT, UPDATE, or UPSERT:
    return db.Clauses(clause.OnConflict{
        UpdateAll: true,
    }).Create(&terms).Error
}

Of course, this does result in a bit of a mess considering we have DB access in a model, which should really be just a DTO if you ask me. This also leaves a lot to be desired in terms of error handling, but the basic gist of it would be this:

func main() {
    db, _ := gorm.Open(mysql.Open(dsn), &gorm.Config{}) // omitted error handling for brevity
    seeds := []interface{
        Seed(*gorm.DB) error
    }{
        model.AirportCodes{},
        model.Terms{},
        // etc...
    }
    for _, m := range seeds {
        if err := m.Seed(db); err != nil {
            panic(err)
        }
    }
    sqlDB, _ := db.DB() // *gorm.DB has no Close in gorm v2; close the underlying *sql.DB
    sqlDB.Close()
}

OK, so this should get us started, but let's just move this all into something a bit nicer by:

  1. Move the whole DB interaction out of the DTO/model
  2. Wrap things into a transaction, so we can roll back on error
  3. Update the initial slice a bit to make things cleaner

So, as mentioned earlier, I'm assuming you have something like repositories to handle DB interactions in a separate package. Rather than calling Seed on the model and passing the DB connection into it, we should instead rely on our repositories:

db, _ := gorm.Open() // same as before
acs := repo.NewAirportCodes(db) // pass in connection
tms := repo.NewTerms(db) // again...

Now our model can still return the JSON file name, or we can have that as a const in the repos. At this point, it doesn't really matter. The main thing is, we can have the actual inserting of data done in the repositories.
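As a rough sketch of what such a repository could look like (the repo package layout, the NewAirportCodes constructor's fields, and the "yourmodule/model" import path are placeholder assumptions, not taken from your code base):

package repo

import (
    "encoding/json"
    "os"

    "gorm.io/gorm"
    "gorm.io/gorm/clause"

    "yourmodule/model" // placeholder import path for the data models
)

type AirportCodes struct {
    db *gorm.DB
}

func NewAirportCodes(db *gorm.DB) *AirportCodes {
    return &AirportCodes{db: db}
}

// Seed reads the seed file and upserts its rows, keeping all DB
// interaction inside the repository instead of the model/DTO.
func (r *AirportCodes) Seed() error {
    data, err := os.ReadFile("seed_data/airport_codes.json")
    if err != nil {
        return err
    }
    codes := []*model.AirportCodes{}
    if err := json.Unmarshal(data, &codes); err != nil {
        return err
    }
    return r.db.Clauses(clause.OnConflict{
        UpdateAll: true,
    }).Create(&codes).Error
}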

You can, if you want, change your seed slice thing to something like this:

calls := []func() error{
    acs.Seed, // assuming your repo has a Seed function that does what it's supposed to do
    tms.Seed,
}

Then perform all the seeding in a loop:

for _, c := range calls {
    if err := c(); err != nil {
        panic(err)
    }
}

Now, this just leaves us with the issue of the transaction stuff. Thankfully, gorm makes this really rather simple:

db, _ := gorm.Open()
db.Transaction(func(tx *gorm.DB) error {
    acs := repo.NewAirportCodes(tx) // create repo's, but use TX for connection
    if err := acs.Seed(); err != nil {
        return err // returning an error will automatically rollback the transaction
    }
    tms := repo.NewTerms(tx)
    if err := tms.Seed(); err != nil {
        return err
    }
    return nil // commit transaction
})

There's a lot more you can fiddle with here, like creating batches of related data that can be committed separately, adding more precise error handling and more informative logging, and handling conflicts better (distinguishing between CREATE and UPDATE, etc.). Above all else, though, something worth keeping in mind:
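For instance, a small sketch of what batched, non-destructive inserts could look like inside the hypothetical repository Seed method above (CreateInBatches and clause.OnConflict{DoNothing: true} are regular gorm features; the batch size of 500 is an arbitrary choice):

// Insert 500 rows per statement, and skip rows that already
// exist instead of overwriting them.
return r.db.Clauses(clause.OnConflict{
    DoNothing: true,
}).CreateInBatches(codes, 500).Error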

Gorm has a migration system

I have to confess that I've not dealt with gorm in quite some time, but IIRC, you can have the tables be auto-migrated if the model changes, and run custom Go code and/or SQL files on startup, which can be used, rather easily, to seed the data. Might be worth looking at the feasibility of that...
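If you go down that route, the auto-migration call itself is a one-liner (AutoMigrate is part of gorm's public API; the model names are the same ones used above):

// Create or update the tables for the given models on startup...
if err := db.AutoMigrate(&model.AirportCodes{}, &model.Terms{}); err != nil {
    panic(err)
}
// ...and run the Seed calls from earlier right after the migration.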

Elias Van Ootegem
  • 74,482
  • 9
  • 111
  • 149