Just for testing, I downloaded a PNG file via HTTP (in this case from a JIRA server via its API).
For the HTTP requests I have a fairly "standard" class HttpFileManager, which I add here just for completeness:
using System;
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class HttpFileManager : MonoBehaviour
{
    public void DownloadImage(string url, Action<Texture> successCallback = null, Credentials credentials = null, Action<UnityWebRequest> errorCallback = null)
    {
        StartCoroutine(DownloadImageProcess(url, successCallback, credentials, errorCallback));
    }

    private IEnumerator DownloadImageProcess(string url, Action<Texture> successCallback, Credentials credentials, Action<UnityWebRequest> errorCallback)
    {
        var www = UnityWebRequestTexture.GetTexture(url);
        if (credentials != null)
        {
            // This simply adds some headers to the request required for the JIRA API;
            // it is not relevant for this question
            AddCredentials(www, credentials);
        }

        yield return www.SendWebRequest();

        if (www.isNetworkError || www.isHttpError)
        {
            Debug.LogErrorFormat("Download from {0} failed with {1}", url, www.error);
            errorCallback?.Invoke(www);
        }
        else
        {
            Debug.LogFormat("Download from {0} complete!", url);
            successCallback?.Invoke(((DownloadHandlerTexture) www.downloadHandler).texture);
        }
    }

    public void UploadFile(byte[] rawData, string url, Action<UnityWebRequest> successCallback = null, Credentials credentials = null, Action<UnityWebRequest> errorCallback = null)
    {
        StartCoroutine(UploadFileProcess(rawData, url, successCallback, credentials, errorCallback));
    }

    private IEnumerator UploadFileProcess(byte[] rawData, string url, Action<UnityWebRequest> successCallback, Credentials credentials, Action<UnityWebRequest> errorCallback)
    {
        var form = new WWWForm();
        form.AddBinaryData("file", rawData, "Test.png");

        var www = UnityWebRequest.Post(url, form);
        www.SetRequestHeader("Accept", "application/json");
        if (credentials != null)
        {
            // This simply adds some headers to the request required for the JIRA API;
            // it is not relevant for this question
            AddCredentials(www, credentials);
        }

        yield return www.SendWebRequest();

        if (www.isNetworkError || www.isHttpError)
        {
            Debug.LogErrorFormat("Upload to {0} failed with code {1} {2}", url, www.responseCode, www.error);
            errorCallback?.Invoke(www);
        }
        else
        {
            Debug.LogFormat("Upload to {0} complete!", url);
            successCallback?.Invoke(www);
        }
    }
}
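The Credentials type and AddCredentials are not shown since they are not relevant for the question; just as a rough idea, for JIRA they boil down to something like this sketch (the field names are made up, assuming Basic auth):

public class Credentials
{
    public string Username;
    public string Password;
}

private static void AddCredentials(UnityWebRequest www, Credentials credentials)
{
    // Basic auth: "user:password" base64-encoded into the Authorization header
    var token = Convert.ToBase64String(System.Text.Encoding.UTF8.GetBytes(credentials.Username + ":" + credentials.Password));
    www.SetRequestHeader("Authorization", "Basic " + token);
}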
Later in my script I do:

// _httpFileManager, _credentials, ImageGetURL and ImagePostUrl are fields configured elsewhere

public Texture TestTexture;

// Begin the download
public void DownloadTestImage()
{
    _httpFileManager.DownloadImage(ImageGetURL, DownloadImageSuccessCallback, _credentials);
}

// After the download, store the Texture
private void DownloadImageSuccessCallback(Texture newTexture)
{
    TestTexture = newTexture;
}

// Start the upload
public void UploadTestImage()
{
    var data = ((Texture2D) TestTexture).EncodeToPNG();
    _httpFileManager.UploadFile(data, ImagePostUrl, UploadSuccessCallback, _credentials);
}

// After uploading
private static void UploadSuccessCallback(UnityWebRequest www)
{
    Debug.Log("Upload worked!");
}
To sum up, the problem lies in the back-and-forth conversion in
((DownloadHandlerTexture) www.downloadHandler).texture
and
((Texture2D) TestTexture).EncodeToPNG();
The result looks like this:
On the top, the original image; on the bottom, the re-uploaded one.
As you can see, the file grows from 40 KB to 59 KB, i.e. by a factor of roughly 1.475. The same applies to larger files: an 844 KB file grew to 1.02 MB.
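To rule out the upload/download code itself, the same decode → re-encode round trip can be tested locally (a minimal sketch; "original.png" is just a placeholder path for a local copy of the file):

// Decode the original PNG into raw RGBA32 pixels, then re-encode it
byte[] originalBytes = System.IO.File.ReadAllBytes("original.png");
var tex = new Texture2D(2, 2);              // size is overwritten by LoadImage
tex.LoadImage(originalBytes);               // PNG -> raw pixel data
byte[] reEncodedBytes = tex.EncodeToPNG();  // raw pixels -> PNG, with Unity's encoder settings
Debug.LogFormat("{0} bytes -> {1} bytes", originalBytes.Length, reEncodedBytes.Length);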
So my questions are:
Why is the uploaded image bigger after EncodeToPNG() than the original image?
and
Is there any compression that could/should be applied to the PNG data in order to achieve the same compression level (if compression is the issue at all)?
At first I thought it might be different color depths, but both images are RGBA 32-bit.
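As a side note: if the conversion itself is the culprit, one could of course skip it and forward the raw downloaded bytes instead (a sketch of the idea, not what my code above does), but that sidesteps the question rather than answering it:

private static IEnumerator DownloadRawProcess(string url, Action<byte[]> successCallback)
{
    // A plain GET uses a DownloadHandlerBuffer, so the PNG bytes arrive unmodified
    var www = UnityWebRequest.Get(url);
    yield return www.SendWebRequest();

    if (!(www.isNetworkError || www.isHttpError))
    {
        // The original file, byte-for-byte; no Texture2D involved
        successCallback?.Invoke(www.downloadHandler.data);
    }
}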
Update
Here are the two images:
original (40 KB) (taken from here)
re-uploaded (59 KB)
Update 2
I repeated the test with a JPG file and EncodeToJPG(), and the result seems to be even worse:
On the top, the original image; on the bottom, the re-uploaded one.
This time the file went from 27 KB to 98 KB, a factor of about 2.63. Strangely, the file size also stayed constant at 98 KB no matter which quality parameter I passed to EncodeToJPG().
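For reference, the quality values were passed directly to EncodeToJPG(), roughly like this (a sketch; 25 and 100 are just example values):

// Quality ranges from 1 to 100 (Unity's default is 75)
byte[] lowQuality  = ((Texture2D) TestTexture).EncodeToJPG(25);
byte[] highQuality = ((Texture2D) TestTexture).EncodeToJPG(100);
Debug.LogFormat("quality 25: {0} bytes, quality 100: {1} bytes", lowQuality.Length, highQuality.Length);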