3 - Working with points¶
In this notebook we use GPlately to manipulate and reconstruct point data. Point data are handled by the Points object:
gpts = gplately.Points(
plate_reconstruction, # plate reconstruction model
lons, # list or numpy array of longitudinal coordinates
lats, # list or numpy array of latitudinal coordinates
time=0, # time is set to the present day by default
plate_id=None # optionally pass an array of pre-determined plate IDs
)
In this example, we will reconstruct occurrence data from the Paleobiology Database (PBDB). These data are provided in CSV format.
import pandas as pd
import numpy as np
import gplately
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
from plate_model_manager import PlateModelManager
We first download the Müller et al. (2019) plate reconstruction model files to use in this notebook, and set up the PlateReconstruction and PlotTopologies objects (called model and gplot) from these model files.
pm_manager = PlateModelManager()
muller2019_model = pm_manager.get_model("Muller2019", data_dir="plate-model-repo")
rotation_model = muller2019_model.get_rotation_model()
topology_features = muller2019_model.get_topologies()
static_polygons = muller2019_model.get_static_polygons()
coastlines = muller2019_model.get_layer('Coastlines')
continents = muller2019_model.get_layer('ContinentalPolygons')
COBs = muller2019_model.get_layer('COBs')
model = gplately.PlateReconstruction(rotation_model, topology_features, static_polygons)
gplot = gplately.PlotTopologies(model, coastlines=coastlines, continents=continents, COBs=COBs)
Download and import PBDB data¶
We can import data from the PBDB straight into pandas using the data URL. Alternatively, we can download the CSV file from the PBDB website and import it from disk (a sketch of the local-file route is shown after the URL import below). For importing CSV files it is usually easiest if the first row holds the column names, although pandas does allow you to skip header rows if needed. Conveniently, the PBDB download page provides an option to exclude the metadata at the beginning of the file.
# download Foraminifera occurrences for the Jurassic, and include the paleoenvironment column.
# You can use the download page to play with the options and get the download link and/or CSV.
pbdb_data_url = 'https://paleobiodb.org/data1.2/occs/list.csv?datainfo&rowcount&base_name=Foraminifera&interval=Jurassic&show=coords,env'
## import from the URL
pbdb_data = pd.read_csv(pbdb_data_url, sep=',', skiprows=18)  # skip the metadata block at the top of the file
pbdb_data.columns
Index(['occurrence_no', 'record_type', 'reid_no', 'flags', 'collection_no', 'identified_name', 'identified_rank', 'identified_no', 'difference', 'accepted_name', 'accepted_rank', 'accepted_no', 'early_interval', 'late_interval', 'max_ma', 'min_ma', 'reference_no', 'lng', 'lat', 'environment'], dtype='object')
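If you prefer to download the CSV from the PBDB website and import it locally, the call is essentially the same. This is a minimal sketch: the filename is a placeholder, and the number of metadata rows to skip depends on your download options.
# import from a locally downloaded CSV instead of the URL
local_csv = "pbdb_foraminifera_jurassic.csv"  # placeholder filename for a manual download
# pbdb_data = pd.read_csv(local_csv, skiprows=18)  # skiprows depends on whether metadata was included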
# Set up a GeoAxis plot
fig = plt.figure(figsize=(16,12), dpi=100)
ax = fig.add_subplot(111, projection=ccrs.Mollweide(central_longitude = 0))
ax.set_global()
ax.gridlines(color='0.7',linestyle='--', xlocs=np.arange(-180,180,15), ylocs=np.arange(-90,90,15))
ax.set_title("Present day distribution of Jurassic Foraminifera")
# Plot shapefile features, subduction zones and MOR boundaries at 0 Ma
gplot.time = 0 # Ma
gplot.plot_continent_ocean_boundaries(ax, color='b', alpha=0.05)
gplot.plot_continents(ax, facecolor='palegoldenrod', alpha=0.2)
gplot.plot_coastlines(ax, color='DarkGrey')
gplot.plot_ridges_and_transforms(ax, color='red')
gplot.plot_trenches(ax, color='k')
gplot.plot_subduction_teeth(ax, color='k')
sc = ax.scatter(pbdb_data['lng'], pbdb_data['lat'], color='orange',
transform=ccrs.PlateCarree(), label='Jurassic Foraminifera')
ax.legend(frameon=False)
Reconstruct PBDB data with GPlately¶
We use the lng, lat coordinates and the mean age of the PBDB data. The Points object needs a PlateReconstruction object as a parameter; we already created one above (model) from the Müller et al. (2019) rotation model, topology features and static polygons obtained through the PlateModelManager. For the reconstruction time we take the mean of each occurrence's midpoint age, i.e. the average of its min_ma and max_ma.
gpts = gplately.Points(model, pbdb_data['lng'], pbdb_data['lat'])
reconstruction_time = np.mean(0.5*(pbdb_data['min_ma'] + pbdb_data['max_ma']))  # mean of the occurrence midpoint ages (Ma)
rlons, rlats = gpts.reconstruct(reconstruction_time, return_array=True)
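As a quick check, we can print the single reconstruction time just computed; the exact value depends on what the PBDB query returned.
print(f"Reconstruction time: {reconstruction_time:.1f} Ma")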
# Set up a GeoAxis plot
fig = plt.figure(figsize=(16,12), dpi=100)
ax = fig.add_subplot(111, projection=ccrs.Mollweide(central_longitude = 0))
ax.set_global()
ax.gridlines(color='0.7',linestyle='--', xlocs=np.arange(-180,180,15), ylocs=np.arange(-90,90,15))
ax.set_title("Reconstructed locations of Jurassic Foraminifera")
# Plot shapefile features, subduction zones and MOR boundaries at the reconstruction time
gplot.time = reconstruction_time # Ma
gplot.plot_continent_ocean_boundaries(ax, color='b', alpha=0.05)
gplot.plot_continents(ax, facecolor='palegoldenrod', alpha=0.2)
gplot.plot_coastlines(ax, color='DarkGrey')
gplot.plot_ridges_and_transforms(ax, color='red')
gplot.plot_trenches(ax, color='k')
gplot.plot_subduction_teeth(ax, color='k')
sc = ax.scatter(rlons, rlats, color='orange',
transform=ccrs.PlateCarree(), label='Jurassic Foraminifera')
ax.legend(frameon=False)
We can make this map look a little nicer by condensing data that plot close to each other. One way is to bin the data into longitude/latitude grid cells (a simple sketch of this is given below). Alternatively, we can use stripy to create an icosahedral mesh, which has relatively uniform point spacing.
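Before moving to the mesh approach, here is what the grid-cell binning mentioned above could look like. This is a minimal sketch using plain numpy; the 15-degree bin size is an arbitrary choice, not something prescribed by GPlately.
# a minimal sketch of longitude/latitude binning (bin size is an arbitrary choice)
bin_size = 15  # degrees
lon_bins = np.arange(-180, 180 + bin_size, bin_size)
lat_bins = np.arange(-90, 90 + bin_size, bin_size)
counts, _, _ = np.histogram2d(rlons, rlats, bins=[lon_bins, lat_bins])
# keep the centres of the occupied cells and their counts for plotting
lon_centres, lat_centres = np.meshgrid(lon_bins[:-1] + bin_size / 2,
                                       lat_bins[:-1] + bin_size / 2, indexing='ij')
occupied = counts > 0
blons, blats, bcounts = lon_centres[occupied], lat_centres[occupied], counts[occupied]
These binned centres and counts could be passed to ax.scatter in the same way as the mesh counts below.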
import stripy
mesh = stripy.spherical_meshes.icosahedral_mesh(refinement_levels=5, tree=True)
# find the nearest mesh vertex to each reconstructed point (stripy expects radians)
distance, indices = mesh.nearest_vertices(np.deg2rad(rlons), np.deg2rad(rlats))
# count how many points fall closest to each vertex
uindices, ucount = np.unique(indices, return_counts=True)
ulons = np.rad2deg(mesh.lons[uindices])
ulats = np.rad2deg(mesh.lats[uindices])
# Set up a GeoAxis plot
fig = plt.figure(figsize=(10,12), dpi=300)
ax = fig.add_subplot(111, projection=ccrs.Mollweide(central_longitude = 0))
ax.set_global()
ax.gridlines(color='0.7',linestyle='--', xlocs=np.arange(-180,180,15), ylocs=np.arange(-90,90,15))
ax.set_title("Reconstructed locations of Jurassic Foraminifera")
# Plot shapefile features, subduction zones and MOR boundaries at the reconstruction time
gplot.time = reconstruction_time # Ma
# gplot.plot_continent_ocean_boundaries(ax, color='b', alpha=0.05)
# gplot.plot_continents(ax, facecolor='palegoldenrod', alpha=0.2)
gplot.plot_coastlines(ax, color='DarkGrey')
gplot.plot_ridges_and_transforms(ax, color='red')
gplot.plot_misc_transforms(ax, color='red')
gplot.plot_trenches(ax, color='k')
gplot.plot_subduction_teeth(ax, color='k')
sc = ax.scatter(ulons, ulats, s=50+ucount, color='DarkOrange', edgecolor='k', alpha=0.5,
transform=ccrs.PlateCarree(), label='Jurassic Foraminifera', zorder=10)
handles, labels = sc.legend_elements(prop="sizes", num=5, color='DarkOrange', markeredgecolor='k')
ax.legend(handles, labels, loc="upper right", title="Number of\nForaminifera", labelspacing=3, handletextpad=2,
bbox_to_anchor=(1.2,1.05), frameon=False)
fig.savefig("Reconstruct_Jurassic_Foraminifera.pdf", bbox_inches='tight')
Add feature attributes¶
Adding attributes to each point can be done seamlessly using the add_attributes method by supplying keyword-value pairs. Some key attributes that can easily be read by GPlates include:
- FROMAGE: the 'from' age, specifying the oldest limit at which the data were active
- TOAGE: the 'to' age, specifying the youngest limit at which the data were active
- PLATEID: the plate ID
Below, we add FROMAGE and TOAGE attributes to the Points object, then save the data to CSV and to a GPML file that can be read directly by GPlates.
gpts.add_attributes(FROMAGE=pbdb_data['max_ma'],
TOAGE=pbdb_data['min_ma'])
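The PLATEID attribute from the list above can be added in the same way. A minimal sketch, assuming the plate IDs assigned when the Points object was created are exposed as gpts.plate_id:
# attach the plate IDs assigned at construction time as a PLATEID attribute
# (assumes they are available as gpts.plate_id)
gpts.add_attributes(PLATEID=gpts.plate_id)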
# save to file
gpts.save("pbdb_data.csv")
gpts.save("pbdb_data.gpml")