[MGNLDAM-1364] Uploading large files (>1GB) Created: 22/Dec/23  Updated: 26/Dec/23  Resolved: 26/Dec/23

Status: Closed
Project: Magnolia DAM Module
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Neutral
Reporter: Patrik P Assignee: Unassigned
Resolution: Not an issue Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Template:
Acceptance criteria:
Empty
Task DoD:
[ ]* Doc/release notes changes? Comment present?
[ ]* Downstream builds green?
[ ]* Solution information and context easily available?
[ ]* Tests
[ ]* FixVersion filled and not yet released
[ ]  Architecture Decision Record (ADR)
Bug DoR:
[ ]* Steps to reproduce, expected, and actual results filled
[ ]* Affected version filled
Date of First Response:

 Description   

Hello, I wouldn't exactly call this a bug, but I didn't know how else to categorize this issue.

Basically, as the name suggests, I have trouble uploading larger files to the DAM. I upload files to Magnolia via a REST API: I first create an asset and then keep uploading file chunks to it, storing each chunk as a Binary in a sub-node of the asset.
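The chunk-storing step looks roughly like this (a simplified sketch; the names "chunks" and "data" are illustrative, and asset, index, and chunkBytes are assumed to be in scope):

// Store each incoming chunk as its own Binary property under the asset
Node chunks = asset.hasNode("chunks")
        ? asset.getNode("chunks")
        : asset.addNode("chunks", "nt:unstructured");
Binary chunkBinary = damSession.getValueFactory()
        .createBinary(new ByteArrayInputStream(chunkBytes));
chunks.addNode("chunk-" + index).setProperty("data", chunkBinary);
damSession.save();

Once I have all the chunks, I try to save the asset this way (I get the chunks and create a single stream):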

// Reassemble all uploaded chunk bytes (out is the ByteArrayOutputStream
// the chunks were copied into) and store them as one binary property.
InputStream in = new ByteArrayInputStream(out.toByteArray());
ValueFactory vf = damSession.getValueFactory();
Binary dataBinary = vf.createBinary(in);
resource.setProperty("data", dataBinary);

This, however, does not work; the following error is thrown:

org.postgresql.util.PSQLException: Unable to bind parameter values for statement.
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:390) ~[postgresql-42.6.0.jar:42.6.0]
    at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:498) ~[postgresql-42.6.0.jar:42.6.0]
    at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:415) ~[postgresql-42.6.0.jar:42.6.0]
    at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:190) ~[postgresql-42.6.0.jar:42.6.0]
    at org.postgresql.jdbc.PgPreparedStatement.execute(PgPreparedStatement.java:177) ~[postgresql-42.6.0.jar:42.6.0]
    at org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172) ~[commons-dbcp-1.4.jar:1.4]
    at org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172) ~[commons-dbcp-1.4.jar:1.4]
    at org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172) ~[commons-dbcp-1.4.jar:1.4]
    at org.apache.jackrabbit.core.util.db.ConnectionHelper.execute(ConnectionHelper.java:524) ~[jackrabbit-data-2.20.9.jar:2.20.9]
    at org.apache.jackrabbit.core.util.db.ConnectionHelper.reallyExec(ConnectionHelper.java:313) ~[jackrabbit-data-2.20.9.jar:2.20.9]
    at org.apache.jackrabbit.core.util.db.ConnectionHelper$1.call(ConnectionHelper.java:293) ~[jackrabbit-data-2.20.9.jar:2.20.9]
    at org.apache.jackrabbit.core.util.db.ConnectionHelper$1.call(ConnectionHelper.java:289) ~[jackrabbit-data-2.20.9.jar:2.20.9]
    at org.apache.jackrabbit.core.util.db.ConnectionHelper$RetryManager.doTry(ConnectionHelper.java:552) ~[jackrabbit-data-2.20.9.jar:2.20.9]
    at org.apache.jackrabbit.core.util.db.ConnectionHelper.exec(ConnectionHelper.java:297) ~[jackrabbit-data-2.20.9.jar:2.20.9]
    at org.apache.jackrabbit.core.data.db.DbDataStore.addRecord(DbDataStore.java:360) ~[jackrabbit-data-2.20.9.jar:2.20.9]
    at org.apache.jackrabbit.core.value.BLOBInDataStore.getInstance(BLOBInDataStore.java:132) ~[jackrabbit-core-2.20.9.jar:2.20.9]
    at org.apache.jackrabbit.core.value.InternalValue.getBLOBFileValue(InternalValue.java:623) ~[jackrabbit-core-2.20.9.jar:2.20.9]
    at org.apache.jackrabbit.core.value.InternalValue.create(InternalValue.java:379) ~[jackrabbit-core-2.20.9.jar:2.20.9]
    at org.apache.jackrabbit.core.value.InternalValueFactory.create(InternalValueFactory.java:108) ~[jackrabbit-core-2.20.9.jar:2.20.9]
    at org.apache.jackrabbit.core.value.ValueFactoryImpl.createBinary(ValueFactoryImpl.java:79) ~[jackrabbit-core-2.20.9.jar:2.20.9] 

This is also part of the error log; it shows the root cause:

Caused by: java.io.IOException: Bind message length 1 073 741 889 too long.  This can be caused by very large or incorrect length specifications on InputStream parameters.
    at org.postgresql.core.v3.QueryExecutorImpl.sendBind(QueryExecutorImpl.java:1724) ~[postgresql-42.6.0.jar:42.6.0]
    at org.postgresql.core.v3.QueryExecutorImpl.sendOneQuery(QueryExecutorImpl.java:2003) ~[postgresql-42.6.0.jar:42.6.0]
    at org.postgresql.core.v3.QueryExecutorImpl.sendQuery(QueryExecutorImpl.java:1523) ~[postgresql-42.6.0.jar:42.6.0]
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:360) ~[postgresql-42.6.0.jar:42.6.0] 

I need to be able to upload files of up to 2GB. The way I eventually got this to work is, in my opinion, very sub-optimal: since I am already uploading and storing the chunks anyway, instead of creating a single binary at the end of the upload process I simply keep the chunks as they are. Later, when I want to retrieve the file, I have to stream the chunks to my frontend application and reassemble the file there. All of this is very cumbersome.
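A server-side alternative I considered (sketch only; chunksNode and the property name "data" are illustrative) is to concatenate the stored chunk binaries into one stream instead of materializing everything in a byte array first:

// Stream the stored chunks back to back instead of buffering the whole
// file in a byte[] (a Java array caps out just under 2GB anyway).
List<InputStream> parts = new ArrayList<>();
for (NodeIterator it = chunksNode.getNodes(); it.hasNext(); ) {
    parts.add(it.nextNode().getProperty("data").getBinary().getStream());
}
InputStream combined = new SequenceInputStream(Collections.enumeration(parts));
resource.setProperty("data", vf.createBinary(combined));

As far as I can tell, though, streaming alone would not fix the error above: the stack trace shows Jackrabbit's DbDataStore handing the entire binary to the driver as a single bind parameter, and the PostgreSQL wire protocol rejects Bind messages over roughly 1 GiB (the 1 073 741 889 bytes in the log is just past that limit).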

What I would like to know is whether it is possible to upload files larger than 1GB, so that I can use TemplatingFunctions etc. to generate file links.
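From what I have read, a common workaround for exactly this limit is to configure Jackrabbit with a filesystem-based datastore so that binaries bypass the database entirely, for example in the workspace's repository.xml (a sketch only, not verified against this setup; the path value is illustrative):

<DataStore class="org.apache.jackrabbit.core.data.FileDataStore">
  <!-- binaries are written to disk instead of a PostgreSQL row -->
  <param name="path" value="${rep.home}/repository/datastore"/>
  <!-- records smaller than this many bytes are kept inline instead -->
  <param name="minRecordLength" value="1024"/>
</DataStore>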

 

I am using the Community Edition; the database is PostgreSQL.
Versions: Magnolia 6.2.40 and DAM module 3.0.27



 Comments   
Comment by Richard Gange [ 26/Dec/23 ]

Hello patrikkirtap-

For questions about the Community Edition, please see our Google Groups User List.

BR
Rich
