A Strategic Approach to Custom Migrations

Clare, Asaph and Adam discuss some higher-level techniques for handling more complicated migration use cases in Drupal 8 in addition to methods for troubleshooting migrations.

Transcript

[00:00:00] Adam Zimmermann: To custom migrations. I'm Adam Zimmermann, software architect here at Chromatic. Clare, you want to introduce yourself quick?

[00:00:10] Clare Ming: Sure. Hi, I'm Clare. I am a senior developer with Chromatic.

[00:00:16] Adam: Asaph.

[00:00:17] Asaph Kotzin: Hey, I'm Asaph Kotzin. I'm also a senior back-end developer at Chromatic.

[00:00:23] Adam: All right. We'll just dive right in. We're going to be talking about migrations at a higher level, strategic approaches to them, but it's also going to assume a little bit of familiarity with some of the technical side of using the Migrate API -- some PHP and object-oriented concepts, a little bit of Drush and module development, and a little bit of configuration management -- but hopefully those will be ancillary to the main points we're going to talk through. First, I'll hand it to Asaph, who's going to talk about the content pyramid.

[00:01:01] Asaph: Thank you, Adam. Hey, everyone. When we approach a complex migration, the first thing we need to ask ourselves is how we are going to structure all of this data. One side is, how is it structured right now? The other side is, how do we want it structured in the new platform? We're left with the part in between. To make a little bit more sense out of all of this, I call it the content pyramid. Adam, can you pass the slide, please? Thanks.

What is content made of? When we think of content, we mostly think about a node that the user views on a website. This content, this node, is actually built of many different pieces. It's constructed of structured data. This can be paragraphs, it can be field collections. It might be what we call utility nodes, which are nodes whose only purpose is to enrich a different node, but all of this structured data is part of this node.

Below that, we have all of our media: images, files, attachments. These can be actual files that move to our newly deployed server and system, or they can be third-party services that we incorporate into the system as media entities. I'm not going to get into much detail about media; Clare is going to give you a very good presentation on that part. Below the media, we're looking at users.

In Drupal, almost everything requires a user. Whether it's an author, a creator, or an owner, we need users. That's what content is made of. Next slide, please. When we go to migrate it, we need to reverse this pyramid. We need to start from the basics, from the users, mostly the editors, but it's a good step to move in all of our users. From there, we can build up. Go to the files. From the files, we build up the media entities.

Then we throw in all of the terms, the taxonomy terms that we're going to use. It's important to add them at this stage because some of them could use media, and most of them are used by the paragraphs and nodes above them. Once we're done with terms, we continue up the ladder, down the pyramid, into our metadata parts: the paragraphs, the field collections, and all of the utility nodes.

In some cases, we're going to have dependencies between them as well, so it's important to map these dependencies out and understand what has to come first and how we layer this data on top of each other. Once we have all of this metadata together, we're actually ready to migrate the nodes themselves. After we migrate the nodes, many times we find ourselves needing to run small scripts to fix attachments or references, to add redirects, metadata, path aliases, all of the stuff that comes at the end when content is actually ready and we can work with it. Next, please.
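In the Migrate API, this ordering can be made explicit with the migration_dependencies key in each migration's YAML. A minimal sketch, with hypothetical migration IDs for the layers described above:

```yaml
# Hypothetical migration IDs illustrating the reversed pyramid: the node
# migration declares the lower layers it depends on, so they must run first.
id: custom_article_nodes
label: 'Article nodes'
migration_dependencies:
  required:
    - custom_users
    - custom_files
    - custom_media_images
    - custom_taxonomy_terms
    - custom_article_paragraphs
```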

If we look at all of this, we can identify that we're actually looking at four different groups of data. The first group is users. It's a very straightforward group for migrations. Even complex systems in most cases have a rather simple structure for users, and I don't think that needs much explanation. Files and media, again, I don't want to go too deep into those, but since they are a base for all of the content, we really need to take care of them before we continue to the rest.

Then we go to the group of metadata: everything that is not directly presented to the user on its own but is always presented as part of content. We have to make sure we have all of this ready before we go ahead and migrate the actual content into the system. Those are the four groups of data that we're looking at. Next, please.

Okay, but why? Why is it important to look at all of this data, break it into groups, and really understand how we're going to approach it? Once we break this into independent groups and scripts, it's much easier to model even complex data. Many times this data can come from different sources. In very complex migrations, we sometimes have the need to unite data that comes from different databases into a single entity at the end.

Once we split all of this up into these groups, it's much easier to understand how we're going to build the layered structures for all of it. Another important benefit is easy rollbacks. It can get tempting to write a migration script that will create a node with all of its paragraphs and references and images and everything in one go, but then rolling it back is very difficult. It's extremely difficult to debug, to understand what's going on. For example, if we have one error in one field, we suddenly need to roll back the entire node instead of just being able to go and change what we need and step back out.

That brings us to the other benefit: a layered progression of the actual content. Once we start looking at these groups and building these layers, it's easier for us as developers, as well as for all the other stakeholders of a project, to understand the data better. It's a great opportunity to look at it and understand what it's being used for. Many times at this stage, we'll find we can make changes to the structure. We'll find together with clients that this is a great opportunity to manipulate data a little bit, and this layered progression allows us to show the development as it goes, see the results, and then work on every little thing on its own.

Another huge benefit is what I like to call the last-mile migration. It's very common that in complex migrations, we work with data that is still alive. We can migrate a legacy system, but the system is still being used. There is more data coming in all the time. Although we're working with static data, our migration approach needs to support the continuous integration of this data into the migration.

Once we layer and split the migration into many different smaller scripts, it's much easier to incorporate this updated data into our flow and update our content as it evolves, until the final migration, which is usually done after a code freeze or a database freeze of the legacy system. Then we run these last-mile scripts. We don't need to change our scripts; we can continue using them.

Maybe the most important part of all of this is that it keeps things simple. Complex data is complex to work with. If we try to model all of it in a single YAML file, it's kind of like trying to put all of your business logic in one function. It helps to encapsulate this data and understand the different layers of it. Then when we look at the YAML migration file for a specific field of a node, it's very easy to deal with it. Complex data becomes simple once we break it up into these groups, understand the different layers of them, and write very simple scripts for managing this data. Next, please.

How do we get this data? How do we work with complex data and run these migrations? My personal favorite approach is CSV files. I like it for many reasons, but mostly because it's easy for everyone to work with. It doesn't depend on specific software or an operating system. It is standard for everybody who works with it. It's also a format that is usually easy to produce from data, whatever the source is, whether it's XML originally or databases in different formats; most systems have the ability to export their data to CSV files so we can work from them.

That is my favorite; XML will do just as well. Any other format that is supported by Migrate can work. The point is to get this data out of the legacy system and into a static file that we can work with. That's also important because that way we have less dependency on a data source. If you're working against live databases, things change and we don't know what's going on. It's kind of like working with a black box, and when you migrate, you want to see the data that you're working with, so having CSV or XML files that we can look at easily helps a lot to understand what it is we're doing.

Another important part is the standardization and the pre-processing power. It can be tempting to try to manipulate and process information as part of the migration script, and many times we don't have any other option, we have to do that, but there's a lot of data that we can pre-process before we bring it into the system. If we have CSV files, we can work with spreadsheet functions and do a lot of data manipulation, and the same goes for some other data sources; we can do a lot of work before the data even reaches our migration script. We export all the data to static files, and we pre-process them to get them as close as we can to the actual data format that we're going to use. Next, please.
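For reference, here is a minimal sketch of what a CSV-backed migration can look like with the contributed migrate_source_csv plugin. The file path, column names, and ID column are assumptions, and the exact source keys (for example header_offset versus header_row_count) vary by module version:

```yaml
# Sketch of a CSV-backed user migration (hypothetical file and columns).
id: custom_users
source:
  plugin: csv
  path: 'private://migrate/users.csv'
  header_offset: 0
  ids:
    - legacy_uid
process:
  name: username
  mail: email
  created: registered_timestamp
destination:
  plugin: entity:user
```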

Then comes the part about manipulation. We did our best. We tried everything we could using the CSVs, but there is a limit to the power of Google Sheets functions or Excel functions. We cannot do things that depend on other entities. We're not connected to the system that we're migrating into. There is a limit to what we can do. We still need to do manipulations on the data.

The first step is all of the core plugins that come with the Migrate API, and there are many. The link in this slide will take you to a page that will show you all of the core plugins and all the migrate_plus plugins for processing data as part of the YAML scripts. Next, please.
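As a taste of what those pipelines look like, here is a short sketch chaining a few core process plugins; the field and source names are hypothetical:

```yaml
# A small process pipeline: skip empty values, split a comma-separated
# column into multiple values, and map a legacy status string to 0/1.
process:
  field_tags:
    - plugin: skip_on_empty
      method: process
      source: tags
    - plugin: explode
      delimiter: ','
  status:
    plugin: static_map
    source: legacy_state
    map:
      published: 1
      draft: 0
    default_value: 0
```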

Even that sometimes is not enough. We still find ourselves with data that just doesn't fit the way we want it in the system. My best tip on this is using the callback plugin. It's a super simple plugin that allows you to call custom PHP functions. They need to be simple ones, but still, it enables you to use a lot of functions on the data as part of the migration script.
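For instance, a small sketch of the callback plugin handing a value through plain PHP callables (the source field name is made up):

```yaml
# Decode HTML entities and trim whitespace using ordinary PHP functions.
process:
  title:
    - plugin: callback
      callable: html_entity_decode
      source: legacy_title
    - plugin: callback
      callable: trim
```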

Sometimes even that is not enough. That's where we go and write custom process plugins. The example that you see here, AppendToField, is a custom processor that knows how to load a node and add an item to an existing field that might already have data in it. That's something that's just not supported by the Migrate API. When we build these custom processors, we keep them generalized and reusable, and we find that many times a few processors can really give you everything you need, after using all of the other plugins and callbacks, to manipulate the data in the way that you want and need to import it.
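The AppendToField plugin from the slide isn't reproduced in this transcript, so here is a rough sketch of what such a custom process plugin can look like; the plugin ID, configuration keys, and field handling are assumptions, not the original code:

```php
<?php

namespace Drupal\custom_migrate\Plugin\migrate\process;

use Drupal\migrate\MigrateExecutableInterface;
use Drupal\migrate\ProcessPluginBase;
use Drupal\migrate\Row;
use Drupal\node\Entity\Node;

/**
 * Appends the incoming value to an existing multi-value field on a node.
 *
 * Sketch only; the nid_property and field configuration keys are assumed.
 *
 * @MigrateProcessPlugin(
 *   id = "append_to_field"
 * )
 */
class AppendToField extends ProcessPluginBase {

  /**
   * {@inheritdoc}
   */
  public function transform($value, MigrateExecutableInterface $migrate_executable, Row $row, $destination_property) {
    $nid = $row->getSourceProperty($this->configuration['nid_property']);
    $field = $this->configuration['field'];

    if ($nid && ($node = Node::load($nid))) {
      // Keep whatever is already on the field and append the new item.
      $items = $node->get($field)->getValue();
      $items[] = ['target_id' => $value];
      $node->set($field, $items);
      $node->save();
    }

    return $value;
  }

}
```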

Just a few more things about what we can do with this data before we go into what this data is. Most of our migrations of complex data will depend on the migration_lookup plugin. Basically, it's a lookup that allows us to reference data that has already been migrated as part of another migration. The most useful companion to that is the overwrite_properties part. Overwrite_properties allows us to tell the migration script to only overwrite specific fields.

If we're coming in and attaching paragraphs to an already existing node, we want to make sure that the only thing we're changing is that specific paragraph field. Whenever we use migration scripts to add to existing content or entities, overwrite_properties is what ensures that we're not corrupting the entire node when we only meant to update a single item.
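A hedged sketch of that pattern, with made-up migration, field, and source names:

```yaml
# Update already-migrated article nodes, touching only one field.
process:
  nid:
    plugin: migration_lookup
    migration: custom_article_nodes
    source: legacy_article_id
  field_related_topics:
    plugin: migration_lookup
    migration: custom_topic_terms
    source: legacy_topic_id
destination:
  plugin: entity:node
  default_bundle: article
  # Only this field is overwritten on the existing node.
  overwrite_properties:
    - field_related_topics
```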

Which brings me to the next point: "More & Simple" is better than "Less & Complex". Simple migration scripts allow easy migrations. Again, it's so tempting to try to get that migration done in one go, to write that YAML file that will build this complex node with 50 fields and images and references. It can work just as well as writing it in 10 different scripts, but when you have this built in 10 different simple scripts, it's much easier to understand what just happened, to roll back specific parts, to make small changes, and to keep this data much more dynamic than it is when you try to do it all in one go.

Another important aspect when we deal with complex data is standardization of date and time formats. I think every single migration I've ever done included custom date and time formats. That's an integral part of every system, especially systems that end up being migrated. That's the base of the data. It's a tremendous value to be able to standardize all of this before it even reaches the system. My personal favorite is Unix timestamps; convert from that to whatever you want when you migrate things in, whether it's a date only or date and time, but if you're able to get all of your dates and times into Unix timestamps, it saves a lot of headaches.

While you're at it, try to standardize everything. Once you take content out of a system and put it in a file, many times you find patterns that you can fix. Many times the owner of the data understands it better, understands the relations better, and is able to standardize things even more. Once the data is out of the system and before you're processing it is the greatest opportunity to standardize everything.
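On the date point specifically, once everything is a Unix timestamp, the core format_date plugin can convert it into whatever storage format the destination field needs; the field and source names here are hypothetical:

```yaml
# Timestamp fields take the integer as-is; datetime fields need a string.
process:
  created: unix_created
  field_event_date:
    plugin: format_date
    from_format: 'U'
    to_format: 'Y-m-d\TH:i:s'
    source: unix_event_date
```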

Just one last part about multilingual migrations. Multilingual migrations are no different, but they do require that you plan well ahead. It's the same as building a multilingual Drupal website: there is nothing inherently different about it, but if you don't plan for it well ahead and understand where you want to take it, it's going to be a headache. Content migration is no different. Drupal migrations support the concept of an original translation source and understand how to add more content as translations of it. If you build your content pyramid in a layered structure of dependencies, you just need to make sure that your original source language is inserted before any other translations, and everything else is the same.
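A rough sketch of what that translation layer can look like: a follow-up migration that looks up the default-language nodes and adds translations onto them (IDs, fields, and the language code are assumptions):

```yaml
id: custom_article_nodes_es
source:
  plugin: csv
  path: 'private://migrate/articles_es.csv'
  header_offset: 0
  ids:
    - legacy_id
process:
  # Attach the translation to the node created by the base migration.
  nid:
    plugin: migration_lookup
    migration: custom_article_nodes
    source: legacy_id
  langcode:
    plugin: default_value
    default_value: es
  title: title_es
  'body/value': body_es
destination:
  plugin: entity:node
  default_bundle: article
  translations: true
migration_dependencies:
  required:
    - custom_article_nodes
```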

That's it for me on high-level how to look at your content. I pass it to Clare.

[00:16:22] Clare: Thanks, Asaph. In this next portion of the presentation, I wanted to zero in on some common use cases. Given the enormity of this topic and the constraint on our time, I had to pick and choose. This is more of a tactical, on-the-ground approach to dealing with complex data structures such as media entities, and then how to process some of your source data coming into your target application. Next slide.

In the context of these two topics, with respect to migrating media entities, I wanted to touch on different source scenarios. The first being: maybe you're coming from a Drupal 7 instance or an earlier version of Drupal 8, working with file, image, or file upload fields, and need to figure out how to transform those into media entities. Then I'll also talk about data coming from non-Drupal data stores, and specifically how to deal with inline embedded media references in your source system and how to bring those over. Sorry, can you go back a little bit?

Then, an option for how to deal with this if you have a simpler site and maybe don't need the Migrate API to do it. Then lastly, just a few words on how to merge multiple source fields into a single entity, as that is also a common scenario that you might encounter. Next slide.

Just a quick word about media in core, it's been in core since 8.4, and as most of us are probably familiar with, it's a really robust framework for media management, whether it's native media or coming from a third party. It's highly extensible. There's a rich ecosystem of contributed modules that support media in core. Of course, the objects that we're talking about are images, PDFs, social media embeds, and the like. The main takeaway is that media entities are standardized, full-fledged, fieldable entities. They make entity references to file entities and they themselves can be the object of entity reference fields from other node types. Next slide.

When you first start approaching how to migrate media, the advice across the board, and Asaph mentioned this too, is that you need to migrate your files first, and then once you do that, generate the media entities. Next slide.

The first order of business is to get all your source files into the managed file system. You can do that with a ready-to-go destination plugin called entity:file. Here's a sample migration YAML file for a file migration, and that will get all your files into Drupal. Next slide.
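The slide itself isn't captured in the transcript, so as a rough stand-in, here is a minimal file migration sketch; the CSV path, column names, and destination directory are assumptions:

```yaml
id: custom_files
source:
  plugin: csv
  path: 'private://migrate/files.csv'
  header_offset: 0
  ids:
    - legacy_fid
  constants:
    destination_dir: 'public://migrated'
process:
  destination_full_path:
    plugin: concat
    delimiter: /
    source:
      - constants/destination_dir
      - filename
  uri:
    plugin: file_copy
    source:
      - source_path
      - '@destination_full_path'
destination:
  plugin: entity:file
```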

The second part of this process is, once you have your file entities ingested and imported, you can create your media entities out of them. There is also a ready-to-go destination plugin called entity:media that will do this for you. As part of that process, you need to do a migration lookup on the file migration that just ran to make that link, that reference, between the media entity and the file entity. That, in summation, is the general process for getting media entities into a Drupal system. Next slide.
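And a rough companion sketch for this second step, building media entities on top of the file migration above; the bundle, field, and migration names are assumptions:

```yaml
id: custom_media_images
source:
  plugin: csv
  path: 'private://migrate/files.csv'
  header_offset: 0
  ids:
    - legacy_fid
process:
  name: filename
  # Link the media entity to the file imported by the previous migration.
  'field_media_image/target_id':
    plugin: migration_lookup
    migration: custom_files
    source: legacy_fid
  'field_media_image/alt': alt_text
destination:
  plugin: entity:media
  default_bundle: image
migration_dependencies:
  required:
    - custom_files
```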

For the first use case that I mentioned, the context I wanted to talk about was: say you have a Drupal 7 install, or even an earlier version of Drupal 8 from before media got into core, and you're trying to figure out how to get all your file entities, or your previous file fields, into media entities. In the course of doing research for this presentation, I came across two contributed modules that try to automate this process and really provide a way to do it efficiently.

The two I want to mention are Media Migration and Migrate File to Media. There are some nuances between them, but I'll just touch on them lightly to talk about how you can use them. Next slide.

Media Migration is a contributed project actually developed by our friends over at Lullabot. It provides a migration pathway for media between Drupal 7 and Drupal 8. One nuance about this that I think is really interesting is that it can also transform all your media WYSIWYG tokens into proper entity embeds in Drupal.

What this does is it basically leverages Drush commands to automate these processes. You can run drush migrate:upgrade, passing in the key for the legacy database as well as the file system root of the legacy system, with the configure-only flag. You can export this configuration and then run your configuration and content migrations. Next slide.

The first command here, the result of importing the configuration entities, will take care of taking whatever fields are attached to your Drupal 7 or Drupal 8 bundles and attaching them to the target bundles. Then once you have that configuration migration done, you can import the media entities using the Content tag flag, and that will take care of importing your entities as well. Next slide.
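Roughly, the flow described is something like the following; the database key, legacy root, and tag names are examples and may differ from the module's current documentation:

```bash
# Generate the migration configuration only (no content yet).
drush migrate:upgrade --configure-only \
  --legacy-db-key=drupal7 \
  --legacy-root=/var/www/legacy/sites/default/files

# Export the generated configuration, then run it in two passes.
drush config:export -y
drush migrate:import --tag=Configuration
drush migrate:import --tag=Content
```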

The other contributed project that I wanted to mention, and that a lot of people seem to have success with, is Migrate File to Media. Next slide. It also leverages Drush commands to automate these processes. You can run a Drush command, passing the entity type, the bundle, the source field, and the target media bundle, and it will generate your target media fields just by running that command, which is very cool. Next slide. It also enables you to generate your migration YAML files just by running a command. When you do that -- next slide -- it will launch a generator.

As a preface to this, you want to make sure you have a custom module already scaffolded. When you are writing migration scripts, you need a place to put them in your code, so we often package them in a custom module. This generator will just walk you through some basic questions, like: what's the machine name of your custom module? What's your source field? What's your target bundle? It will then generate your migration YAML files and save them to the config/install directory of the custom module that you specify. Next slide.

One cool feature about this contributed project is that it does duplicate file detection. By running its Drush command with the migration ID of the image or file migration that you have going, it will create a binary hash of each file and store that in a table in the database. You need to do this before you actually run your media import, but when you go ahead and run your media migration, it will double-check against that table and it will not import a file if it already exists. Next slide.

That covers a quick and dirty approach if you have your [inaudible 00:24:04] versus a Drupal application, but oftentimes during migrations, we're pulling in data from a non-Drupal data store. One common use case I wanted to address is: say you have a WordPress article that you want to move into Drupal. In the legacy system, maybe editors have been adding inline image tags or social media embeds, so how do we approach migrating those articles, with all these inline media references, into Drupal?

The approach is that you're going to be using the same source as the attached-to entity; in our example of a WordPress article, it's the body field of that article. We need to parse that body data, and we can do that by creating a custom process plugin. In it, we can use regular expressions to identify and target those inline references, then process them to ultimately swap them out with whatever proper entity-embed code you want in the destination article body field. You can write custom methods to do the file lookups, generate the entities, and then ultimately create the embed markup that you want as the replacement in that article body value. Next slide.

Here's an example from an article migration YAML file, where we're taking the body value of that article and passing it through some process plugins. Notably, the one we want to look at is a custom process plugin that we called entity_embed. Next slide. In here, you can see in the transform method there are two regular expressions that are looking for matches. In the first case, we're looking for image shortcodes with IDs. In the second case, we're looking for social media embeds and tags. If there is a string match, it will pass that as a parameter to further processing methods, convertImageMarkup and convertMedia respectively.
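A hedged sketch of that part of the article migration; the plugin ID and source column follow the example as described and are assumptions about that codebase:

```yaml
# The article body runs through a pipeline that ends in the custom plugin.
process:
  'body/value':
    - plugin: callback
      callable: trim
      source: post_content
    - plugin: entity_embed
  'body/format':
    plugin: default_value
    default_value: full_html
```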

We'll take a quick look inside of those. Next slide.

In convertImageMarkup, if there's a match with a shortcode with an ID, it will do a lookup on that ID and see if that media entity exists. If it does, it extracts its UUID by loading the media entity and then passes that as a parameter to the final method that will actually generate the embed markup. Next slide.

In the case of social media embeds, the convertMedia method will also run a regular expression to extract the URL. It'll check to see if it exists and then determine the type of media. The three that we're concerned about here are YouTube, Instagram, and Twitter. If we have the URL and the type, we can pass those to a final method to actually generate the media embed markup. Next slide.

What those do, ultimately, is return the actual embedded entities. Here we have the actual markup, the HTML embed codes, that will replace the former inline references in the article body values on the destination side. Next slide.
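Since the slides aren't reproduced here, the following is a simplified sketch of what such a plugin can look like. The shortcode pattern, the legacy-ID lookup field, the remote media bundle, and the drupal-media markup are all assumptions, not the original code:

```php
<?php

namespace Drupal\custom_migrate\Plugin\migrate\process;

use Drupal\media\Entity\Media;
use Drupal\migrate\MigrateExecutableInterface;
use Drupal\migrate\ProcessPluginBase;
use Drupal\migrate\Row;

/**
 * Swaps legacy inline references in body HTML for entity-embed markup.
 *
 * @MigrateProcessPlugin(
 *   id = "entity_embed"
 * )
 */
class EntityEmbed extends ProcessPluginBase {

  /**
   * {@inheritdoc}
   */
  public function transform($value, MigrateExecutableInterface $migrate_executable, Row $row, $destination_property) {
    // Legacy image shortcodes such as [image id="123"].
    $value = preg_replace_callback('/\[image id="(\d+)"\]/', function (array $matches) {
      return $this->convertImageMarkup($matches[1]) ?? $matches[0];
    }, $value);

    // Bare social media URLs (YouTube, Instagram, Twitter) on their own line.
    $value = preg_replace_callback('~^https?://(www\.)?(youtube\.com|youtu\.be|instagram\.com|twitter\.com)/\S+$~m', function (array $matches) {
      return $this->convertMedia($matches[0]) ?? $matches[0];
    }, $value);

    return $value;
  }

  /**
   * Looks up an already-migrated image by its legacy ID.
   */
  protected function convertImageMarkup(string $legacy_id): ?string {
    // Assumes the media migration stored the legacy ID in a custom field.
    $media_ids = \Drupal::entityTypeManager()
      ->getStorage('media')
      ->getQuery()
      ->accessCheck(FALSE)
      ->condition('field_legacy_id', $legacy_id)
      ->range(0, 1)
      ->execute();

    if (!$media_ids) {
      return NULL;
    }
    $media = Media::load(reset($media_ids));
    return $this->buildEmbed($media->uuid());
  }

  /**
   * Creates a remote media entity for a social media URL.
   */
  protected function convertMedia(string $url): ?string {
    // Simplified: a single remote_video bundle stands in for all types here.
    $media = Media::create([
      'bundle' => 'remote_video',
      'name' => $url,
      'field_media_oembed_video' => $url,
    ]);
    $media->save();
    return $this->buildEmbed($media->uuid());
  }

  /**
   * Returns <drupal-media> markup for the given media UUID.
   */
  protected function buildEmbed(string $uuid): string {
    return '<drupal-media data-entity-type="media" data-entity-uuid="' . $uuid . '"></drupal-media>';
  }

}
```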

The third context I also wanted to mention in terms of migrating media entities is: maybe you have a site or an application where it feels a little heavy-handed to have to do all the work with migration modules. I'm making a shameless plug for Adam, who wrote an article a while back about how to do this without the Migrate API. I highly encourage anyone, if this matches your use case, to go check that out. It basically recommends using update hooks to generate media entities and then to delete those file fields subsequently. Next slide.
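A minimal sketch of that update-hook approach, assuming an image field, an image media bundle, and a new media reference field; the module name, field names, and lack of batching are simplifications rather than the article's actual code:

```php
<?php

use Drupal\media\Entity\Media;
use Drupal\node\Entity\Node;

/**
 * Wrap existing field_image files on articles in media entities.
 */
function custom_media_update_9001() {
  $nids = \Drupal::entityQuery('node')
    ->accessCheck(FALSE)
    ->condition('type', 'article')
    ->exists('field_image')
    ->execute();

  foreach (Node::loadMultiple($nids) as $node) {
    $file = $node->get('field_image')->entity;
    if (!$file) {
      continue;
    }
    // Wrap the existing managed file in a media entity.
    $media = Media::create([
      'bundle' => 'image',
      'name' => $file->getFilename(),
      'field_media_image' => [
        'target_id' => $file->id(),
        'alt' => $node->get('field_image')->alt,
      ],
    ]);
    $media->save();

    // Point the new media reference field at it.
    $node->set('field_media', ['target_id' => $media->id()]);
    $node->save();
  }
}
```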

Then lastly, I just wanted to touch on how to deal with multiple source fields. This is a common scenario: when you're doing migrations, it's often an opportunity to streamline and update your content model and consolidate objects that were maybe all over the place in the legacy system. We want to bring those all into a single entity in the target system. You can do that; you just have to make sure you choose your primary source wisely. Then you can write a custom source plugin to extract further data from other places in your data store and set those as additional source properties in the plugin. Next slide.

Here's an example of a custom source plugin for an article where, inside the prepareRow method, you can run multiple queries to set your source properties. In this case, we want to find other taxonomy terms that might be attached to this article, or other attachments. We also want to set the author and set the meta tags for this article. You can see inside here, on the row, there's a setSourceProperty method that allows you to do this. You can set the property and then derive its value from whatever the results of your query are. That will then be available in your migration YAML file. Next slide.
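The slide code isn't in the transcript, so here is a sketch of the pattern with made-up legacy table and column names:

```php
<?php

namespace Drupal\custom_migrate\Plugin\migrate\source;

use Drupal\migrate\Plugin\migrate\source\SqlBase;
use Drupal\migrate\Row;

/**
 * Article source that merges extra data from other legacy tables.
 *
 * @MigrateSource(
 *   id = "legacy_article"
 * )
 */
class LegacyArticle extends SqlBase {

  /**
   * {@inheritdoc}
   */
  public function query() {
    return $this->select('legacy_articles', 'a')
      ->fields('a', ['id', 'title', 'body', 'created', 'author_id']);
  }

  /**
   * {@inheritdoc}
   */
  public function fields() {
    return [
      'id' => $this->t('Legacy article ID'),
      'title' => $this->t('Title'),
      'body' => $this->t('Body HTML'),
      'created' => $this->t('Created timestamp'),
      'author_id' => $this->t('Legacy author ID'),
    ];
  }

  /**
   * {@inheritdoc}
   */
  public function getIds() {
    return ['id' => ['type' => 'integer']];
  }

  /**
   * {@inheritdoc}
   */
  public function prepareRow(Row $row) {
    $id = $row->getSourceProperty('id');

    // Merge in taxonomy terms stored in a separate legacy table.
    $terms = $this->select('legacy_article_terms', 't')
      ->fields('t', ['term_name'])
      ->condition('t.article_id', $id)
      ->execute()
      ->fetchCol();
    $row->setSourceProperty('terms', $terms);

    // Merge in the author's email from the legacy user table.
    $email = $this->select('legacy_users', 'u')
      ->fields('u', ['email'])
      ->condition('u.id', $row->getSourceProperty('author_id'))
      ->execute()
      ->fetchField();
    $row->setSourceProperty('author_email', $email);

    return parent::prepareRow($row);
  }

}
```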

Here's another example. This is, again, from an example in an article that Lullabot wrote about merging entities. In this instance, the context is that it's overriding the default taxonomy term source plugin and merging another entity's fields with it. It runs a query, joining on the term ID, and if there's a match, it takes the fields of that other entity, which is not being migrated on its own; it's being added to the term migration. It sets the source property for that entity's fields and, with a get-field-values method, applies those values as well. I know that was a lot of information for a short amount of time, but those are some approaches we can take for dealing with complex data structures and requirements. With that, I'll pass it over to Adam, who's going to talk about how to debug and troubleshoot migrations effectively.

[00:30:41] Adam: Now that we've written our migrations and [inaudible 00:30:44] the problems, we're going to talk about some tricks to get them to run faster and understand what's happening. First, there's this cool Migrate Status module. Many times in code, you have entity hooks, and you might not want all of those to run during a migration. For example, maybe a hook populates one field from other field values, and you might not need to do that because the migration does that for you. Or maybe you're making an API call on entity save, and you don't want to make a couple thousand API calls within a couple of seconds and blow up your API limit.

This is a clean way to just detect if the migration's running and adjust to that in your entity hooks. Nice simple utility module.
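As an illustration of the guard pattern itself (not necessarily the module's exact API), something along these lines can be done with core's migration plugin manager; the hook implementation and module name are hypothetical:

```php
<?php

use Drupal\Core\Entity\EntityInterface;
use Drupal\migrate\Plugin\MigrationInterface;

/**
 * Implements hook_entity_presave().
 *
 * Skip expensive side effects while any migration is running.
 */
function custom_module_entity_presave(EntityInterface $entity) {
  $manager = \Drupal::service('plugin.manager.migration');
  foreach ($manager->createInstances([]) as $migration) {
    if ($migration->getStatus() !== MigrationInterface::STATUS_IDLE) {
      // A migration is importing or rolling back; skip the API call,
      // derived-field updates, and so on that normally happen on save.
      return;
    }
  }

  // Normal presave side effects go here.
}
```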

Limits. This is another great tool when you're writing a migration. You might only need to do one or two items to prove out your theory, to see if your migration is going to work, and that's great. You can use this flag. It should be known that this does not actually apply a limit to your source query. If you're using SQL, there is no LIMIT on that query. It's still grabbing all those rows, and it still calls prepareRow on every single source record.

If you have subsequent queries in prepareRow, like Clare was just talking about, those are all still going to run even if you have a limit of one. That can quickly cause things to slow down or, in some cases, cause the migrate queries to not even be runnable, as it's simply trying to do too much. [inaudible 00:32:24] through the SQL migration hidden in the source class -- not really hidden, because that's the beauty of open source: you can go see exactly how it's working, understand what's going on, and also make changes to it and submit back some patches.

You can adjust your batch size, whether that's through your source plugin definition in your migration YAML file or just by setting it as a property in your source plugin class if you're using a custom plugin. You can set this to whatever value you want, and then it puts a limit on your query and allows it to run, or run a lot more quickly.
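For example, a bounded SQL source might look like this (the plugin ID is hypothetical and the value is arbitrary):

```yaml
source:
  plugin: my_legacy_sql_source
  # Fetch the source rows in chunks of 500 instead of one giant query.
  batch_size: 500
```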

Some other options. There's all sorts of ways to define how joins are done in the query. Set your database target. There's all sorts of stuff you can do here, it's just cool to go exploring in the source classes of the migration module and understand what it does and what all your options are. Sometimes it's a bit opaque when you're just in a YAML file trying to figure out what to put there and looking at examples online.

Next, migration speed. I'm sure we've all written migrations, tested them with a few items, thought we were good to go, and then tried to run the whole thing and realized things are not working as intended. I've found that when you first start a migration, items go a lot more quickly, and then as it keeps going, the speed slows down considerably. Eventually, sometimes it'll crash, because it doesn't fully reclaim the memory usage like it claims to. There are issues on drupal.org about this, but I don't believe they're solved yet. In the interim, we have some tricks.

It should be known that I originally found this trick in a Mediacurrent article, then I used it and wrote my own article, and then someone else even took that and ran with it even more. The trick is you run drush migrate:import with that limit flag, and possibly with the batch size too; you can combine both of those together. Then you just keep calling it in a loop in a shell script with a low limit value, and that keeps your average item processing time much lower and allows the migration to finish without having memory issues. This link shows the script that we used. It's not ideal, because it's a brittle shell script, but sometimes that's what it takes to get the migration done. It's also nice because you can have the migration sequence and ordering Asaph was talking about committed in a shell script in your repo. There are benefits to this too, even if you don't do a loop.
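A rough version of that loop, with an example migration ID and counts; the real script linked above adds error handling:

```bash
# Keep each Drush process small by importing a limited number of items
# per run; repeat until the migration is done.
for i in $(seq 1 40); do
  drush migrate:import legacy_articles --limit=250
done
```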

This is the guy who took the script idea and expanded upon it even more. He wrote a migrate import batch command that does all the magic of that shell script and bundles it into a single command, which is really nice. It does seem to only work with CSV files, as he actually splits up the file and makes separate ones, but it's pretty cool that you can just run this, and then in the background it does everything it needs to do to run the migration without having memory issues.

Migrate Devel. This is a fantastic module that has been around a long time and only recently began working with Drush 9; as of June 12th, [unintelligible 00:36:04] just got committed, so that's great. You get the migrate-debug and migrate-debug-pre options. You pass those to migrate:import and it spits out exactly what's coming in and what's going out. It's a great tool to help you understand what's happening, as sometimes it can be very opaque.

You might run a migration with 10 items and everything works, and then you run it for 50 and you run into some bad data. You're like, “Why didn't this work? It was working on the other 10.” This helps you see what's coming in and what's going out and adjust your processing to be as robust as needed to solve everything. This is also when you can easily see those timestamps and dates and get those all cleared up like Asaph was talking about.

There are all sorts of great resources on this. There's stuff on drupal.org, and there's stuff that talks about how to use the callback plugin with var_dump; as Asaph was saying, the callback plugin is a simple one that just calls any PHP function, and of course, you can call var_dump. Then you can use Xdebug, and some of these posts talk you through that. There are all sorts of tips and tricks, and I encourage you to check out some of these articles, as there's just too much to cover in this short time.
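One of those tricks, sketched with a made-up field: temporarily route a value through var_dump to see exactly what the pipeline receives.

```yaml
# Debugging only: var_dump prints the incoming value but returns NULL,
# so the field ends up empty. Remove this once you've seen the output.
process:
  field_subtitle:
    plugin: callback
    callable: var_dump
    source: legacy_subtitle
```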

With that, I will open up the floor to questions. Anyone have any questions? [pause 00:37:36]

[00:37:47] Mark: What's still missing from the tooling? What items or types of problems do you still regularly run into?

[00:38:06] Adam: I think a good way to solve the memory issues for a SQL-based migration, without resorting to custom code and hacky things like using these limits, would be great. Clare or Asaph, any thoughts from you on that?

[00:38:30] Clare: In terms of the tooling for debugging?

[00:38:33] Adam: Just in general? I think Mark's question is generic, just anything that would make the whole migration process better.

[00:38:41] Clare: Again, there's a lot out there in contrib that's making a lot of headway in automating some of these processes. Specifically with media entities, some of these modules came out earlier this year or last year to try to make it more efficient, so that you're not writing migration scripts from scratch. It's hard to cover every use case, but for stuff where you have straightforward entities that need to come over, but that are not handled by either migrate-upgrade or some of these other ones, I think there's just a lot out there that is trying to solve this problem.

I think, again, every migration is custom, so there's always going to be something you might have to write custom code for. I think the tools are getting better, and I think there's more coming out every day to try to automate some of this stuff. They're looking really promising. I'm trying to think of ones off the top of my head that cover most things but then maybe don't cover some edge cases. Let me think about that for a minute.

[00:40:00] Adam: Great. Most of the migrations I did were a couple of years ago, and just recently I've seen so many cool modules that make very difficult things a lot easier. They wrote some of the same plugins that I wrote, but took them to the next level and expanded on them. If you're struggling with something, I'd encourage everyone to just look out at contrib, because there are some really great tools out there to aid in the migration process.

[00:40:33] Mark: With regard to the limit command not actually limiting things on the source, that's been the case for a long time I feel like. Would you consider that a bug or a feature at this point or just something that's not able to be resolved?

[00:40:53] Adam: That's a good question. I don't know. I'm guessing there's a good reason that it is the way it is, because sometimes you do need to gather all the source records to do something. I'm trying to remember the use case I had for this one time, but you had to get everything loaded up and then check things against each other for a remapping of taxonomy terms, or something like that I did in a migration once. I don't know.

[00:41:30] Mark: Maybe it's that if you're not importing an entire set and you need to filter somehow, you can't just grab ten, because maybe the result out of those ten would be zero. You have to grab the whole thing and then prepare the rows. I'm guessing maybe it's something related to that.

[00:41:54] Adam: Just as long as you're aware of it, I think that's the important thing, and if you wanted to, you could always write your own SQL-based plugin that reacted to that flag differently and did respect it in some way. That's the beauty of open source.

[00:42:13] Mark: I think we're going to say it's a feature.

[00:42:14] Adam: Yes, I'm sticking with that, because I don't know why whoever built it did it that way. Any other questions? Clare, Asaph, any last words of wisdom? All right.

[00:42:41] Mark: Will you be able to add the links that you shared? Can we add those to the video description for folks that might see this later?

[00:42:50] Adam: Sure, we can get those added. All right, well, thank you so much for joining us. We'll get all these links added to the video description and hopefully this helps you in your next migration. Have a good one.

[silence]

[00:43:31] [END OF AUDIO]