Here's an example of getting a Sentinel-2 image from Google Earth Engine:
import ee

# Initialize the Earth Engine client (assumes authentication has already been done)
ee.Initialize()

# Sentinel-2 surface reflectance image collection
s2_image_collection = ee.ImageCollection("COPERNICUS/S2_SR")
# Defining bounds around the area of interest
sls_latitude = 37.424160
sls_longitude = -122.167951
sls_polygon = ee.Geometry.Polygon(
    coords=[
        ee.Geometry.Point(-122.170884, 37.446027),
        ee.Geometry.Point(-122.152749, 37.430515),
        ee.Geometry.Point(-122.174323, 37.419214),
        ee.Geometry.Point(-122.189342, 37.429405),
    ]
)
bounds = sls_polygon
start = ee.Date('2019-10-01')
finish = ee.Date('2019-12-31')
filteredCollection = s2_image_collection.filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 15))\
    .filterBounds(bounds)\
    .filterDate(start, finish)
# Get the first image in the collection and clip it to the bounds
image = filteredCollection.first().clip(bounds)
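A quick way to confirm the filters actually matched something is to print the collection size and the first image's ID:

# Confirm the filtered collection is non-empty
print(filteredCollection.size().getInfo())    # number of matching images
print(image.get('system:index').getInfo())    # ID of the first image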
Now, to convert the image to an "Array Image", where each pixel is an array of band values, I'm following the instructions here: https://developers.google.com/earth-engine/arrays_array_images#array-images.
# Now select the bands of interest
image_select = image.select(['B4', 'B3', 'B2'])
# Make an array image, with a 1-D array per pixel
arrayImage1D = image_select.toArray()
# Make an array image with a 2-D array per pixel, 3x1
arrayImage2D = arrayImage1D.toArray(1)
And then when I call getInfo(), I get:
arrayImage1D.getInfo()
{'type': 'Image',
'bands': [{'id': 'array',
'data_type': {'type': 'PixelType',
'precision': 'int',
'min': 0,
'max': 65535,
'dimensions': 1},
'dimensions': [324, 299],
'origin': [7174, 5532],
'crs': 'EPSG:32610',
'crs_transform': [10, 0, 499980, 0, -10, 4200000]}],
'properties': {'system:footprint': {'type': 'Polygon',
'coordinates': [[[-122.17088399999999, 37.446027],
[-122.189342, 37.429405],
[-122.174323, 37.419214],
[-122.15274900000004, 37.430515],
[-122.17088399999999, 37.446027]]]}}}
So now my questions:
- Why are the dimensions showing [324, 299] for arrayImage1D? Shouldn't that be a one-dimensional array?
- How do I actually see the values for this array? My initial thought was to convert to a numpy array, but maybe that's not actually what I want to do?
- What I'm actually trying to do is start with an image collection that has many images of the same location over the course of a year and take the temporal diff of each image. In other words, for an image on 2019-01-05, get each pixel's array of band values and diff it against the image of the same location from 2019-01-01; repeat this throughout the year and store the distribution of diffs (a rough sketch of what I mean is below). What's the best way to do this? Would it be to convert to a numpy array, or should I use TFRecord (https://developers.google.com/earth-engine/tfrecord; Exporting image array to TFRecord in Google Earth Engine)?
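To make that last point concrete, this is roughly what I have in mind. It's an untested sketch; sampleRectangle plus a client-side loop is just my current guess at how to pull values out and compare consecutive images:

import ee
import numpy as np

# Guess at how to inspect actual pixel values: sampleRectangle pulls the band
# values of the clipped image back as nested lists, one property per band.
# defaultValue=0 fills masked pixels outside the polygon.
sample = image_select.sampleRectangle(region=bounds, defaultValue=0)
b4 = np.array(sample.get('B4').getInfo())
b3 = np.array(sample.get('B3').getInfo())
b2 = np.array(sample.get('B2').getInfo())
pixels = np.stack([b4, b3, b2], axis=-1)  # shape: (rows, cols, 3)

# Guess at the temporal diff: sort the collection by acquisition time, turn it
# into a list, and subtract each image from the one that follows it.
sorted_collection = filteredCollection.sort('system:time_start')
image_list = sorted_collection.toList(sorted_collection.size())
n = image_list.size().getInfo()
diffs = []
for i in range(1, n):
    current = ee.Image(image_list.get(i)).select(['B4', 'B3', 'B2']).clip(bounds)
    previous = ee.Image(image_list.get(i - 1)).select(['B4', 'B3', 'B2']).clip(bounds)
    diffs.append(current.subtract(previous))

But I don't know whether looping client-side like this is reasonable, or whether the whole thing should stay server-side (e.g., as an export).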