Magnolia DAM Module · MGNLDAM-1364

Uploading large files (>1GB)


Details

    • Type: Bug
    • Resolution: Not an issue

    Description

      Hello, I wouldn't exactly call this a bug, but I didn't know how else to categorize this issue.

      Basically, as the name suggests, I have issues uploading larger files to the DAM. I upload files to Magnolia through a REST API: first I create an asset, then I keep uploading file chunks to it, storing them as Binary values in a subnode of the asset. Once I have all the chunks, I try to save the asset this way (I read the chunks back and create a stream):

      // concatenate the previously uploaded chunks into a single stream
      InputStream in = new ByteArrayInputStream(out.toByteArray());
      ValueFactory vf = damSession.getValueFactory();
      Binary dataBinary = vf.createBinary(in); // this is where the error below is thrown
      resource.setProperty("data", dataBinary);

      This, however, does not work; the following error is thrown:

      org.postgresql.util.PSQLException: Unable to bind parameter values for statement.
          at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:390) ~[postgresql-42.6.0.jar:42.6.0]
          at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:498) ~[postgresql-42.6.0.jar:42.6.0]
          at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:415) ~[postgresql-42.6.0.jar:42.6.0]
          at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:190) ~[postgresql-42.6.0.jar:42.6.0]
          at org.postgresql.jdbc.PgPreparedStatement.execute(PgPreparedStatement.java:177) ~[postgresql-42.6.0.jar:42.6.0]
          at org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172) ~[commons-dbcp-1.4.jar:1.4]
          at org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172) ~[commons-dbcp-1.4.jar:1.4]
          at org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172) ~[commons-dbcp-1.4.jar:1.4]
          at org.apache.jackrabbit.core.util.db.ConnectionHelper.execute(ConnectionHelper.java:524) ~[jackrabbit-data-2.20.9.jar:2.20.9]
          at org.apache.jackrabbit.core.util.db.ConnectionHelper.reallyExec(ConnectionHelper.java:313) ~[jackrabbit-data-2.20.9.jar:2.20.9]
          at org.apache.jackrabbit.core.util.db.ConnectionHelper$1.call(ConnectionHelper.java:293) ~[jackrabbit-data-2.20.9.jar:2.20.9]
          at org.apache.jackrabbit.core.util.db.ConnectionHelper$1.call(ConnectionHelper.java:289) ~[jackrabbit-data-2.20.9.jar:2.20.9]
          at org.apache.jackrabbit.core.util.db.ConnectionHelper$RetryManager.doTry(ConnectionHelper.java:552) ~[jackrabbit-data-2.20.9.jar:2.20.9]
          at org.apache.jackrabbit.core.util.db.ConnectionHelper.exec(ConnectionHelper.java:297) ~[jackrabbit-data-2.20.9.jar:2.20.9]
          at org.apache.jackrabbit.core.data.db.DbDataStore.addRecord(DbDataStore.java:360) ~[jackrabbit-data-2.20.9.jar:2.20.9]
          at org.apache.jackrabbit.core.value.BLOBInDataStore.getInstance(BLOBInDataStore.java:132) ~[jackrabbit-core-2.20.9.jar:2.20.9]
          at org.apache.jackrabbit.core.value.InternalValue.getBLOBFileValue(InternalValue.java:623) ~[jackrabbit-core-2.20.9.jar:2.20.9]
          at org.apache.jackrabbit.core.value.InternalValue.create(InternalValue.java:379) ~[jackrabbit-core-2.20.9.jar:2.20.9]
          at org.apache.jackrabbit.core.value.InternalValueFactory.create(InternalValueFactory.java:108) ~[jackrabbit-core-2.20.9.jar:2.20.9]
          at org.apache.jackrabbit.core.value.ValueFactoryImpl.createBinary(ValueFactoryImpl.java:79) ~[jackrabbit-core-2.20.9.jar:2.20.9] 

      This is also part of the error log. Note that the rejected bind length, 1 073 741 889 bytes, is just over 1 GiB (2^30 = 1 073 741 824):

      Caused by: java.io.IOException: Bind message length 1 073 741 889 too long.  This can be caused by very large or incorrect length specifications on InputStream parameters.
          at org.postgresql.core.v3.QueryExecutorImpl.sendBind(QueryExecutorImpl.java:1724) ~[postgresql-42.6.0.jar:42.6.0]
          at org.postgresql.core.v3.QueryExecutorImpl.sendOneQuery(QueryExecutorImpl.java:2003) ~[postgresql-42.6.0.jar:42.6.0]
          at org.postgresql.core.v3.QueryExecutorImpl.sendQuery(QueryExecutorImpl.java:1523) ~[postgresql-42.6.0.jar:42.6.0]
          at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:360) ~[postgresql-42.6.0.jar:42.6.0] 
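
      Incidentally, even setting that limit aside, buffering the whole file through out.toByteArray() caps out near 2 GB anyway, because Java arrays are int-indexed. Below is a minimal sketch of the same assembly step spooled through a temporary file instead (chunkContainer, damSession and resource are illustrative names from my snippet above; the chunk layout is an assumption):

      import java.io.InputStream;
      import java.io.OutputStream;
      import java.nio.file.Files;
      import java.nio.file.Path;
      import javax.jcr.Binary;
      import javax.jcr.Node;
      import javax.jcr.NodeIterator;

      // spool the chunk binaries into one temporary file instead of one huge byte array
      Path tmp = Files.createTempFile("dam-upload-", ".bin");
      try (OutputStream spool = Files.newOutputStream(tmp)) {
          NodeIterator chunks = chunkContainer.getNodes(); // assumes iteration order matches upload order
          while (chunks.hasNext()) {
              Node chunk = chunks.nextNode();
              try (InputStream chunkIn = chunk.getProperty("data").getBinary().getStream()) {
                  chunkIn.transferTo(spool); // Java 9+
              }
          }
      }
      try (InputStream in = Files.newInputStream(tmp)) {
          Binary dataBinary = damSession.getValueFactory().createBinary(in);
          resource.setProperty("data", dataBinary); // still hits the 1 GiB bind limit while blobs live in PostgreSQL
      } finally {
          Files.delete(tmp);
      }

      This keeps heap usage flat, but it does not get around the error above as long as Jackrabbit stores the binaries in the database.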

      I need to be able to upload files up to 2 GB. The way I eventually got this to work is, in my opinion, very suboptimal: since I already upload and store the chunks anyway, instead of creating a single binary at the end of the upload process I simply leave the chunks as they are. Later, when I want to retrieve the file, I have to stream them to my frontend application and rebuild the file there (see the sketch below). All of this is very annoying.
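
      For concreteness, the retrieval half of that workaround looks roughly like this (a sketch assuming a plain servlet; chunkContainer is again the illustrative subnode holding the chunks):

      import java.io.InputStream;
      import javax.jcr.Node;
      import javax.jcr.NodeIterator;
      import javax.servlet.http.HttpServletResponse;

      // stream the stored chunks back in upload order; the frontend rebuilds the file from the stream
      void streamChunks(Node chunkContainer, HttpServletResponse response) throws Exception {
          response.setContentType("application/octet-stream");
          NodeIterator chunks = chunkContainer.getNodes(); // assumes chunk nodes iterate in upload order
          while (chunks.hasNext()) {
              Node chunk = chunks.nextNode();
              try (InputStream in = chunk.getProperty("data").getBinary().getStream()) {
                  in.transferTo(response.getOutputStream());
              }
          }
      }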

      What I would like to know is whether it is possible to upload files larger than 1 GB, so that I can use TemplatingFunctions to generate file links and so on.
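
      For context, the link generation I have in mind is along these lines (a sketch; it assumes DamTemplatingFunctions exposes getAssetLink(itemKey) the way the docs describe, and the item key is illustrative):

      import info.magnolia.dam.templating.functions.DamTemplatingFunctions;
      import javax.inject.Inject;

      class AssetLinkExample {
          @Inject
          private DamTemplatingFunctions damfn; // what templates see as "damfn"

          String linkFor(String itemKey) {
              // itemKey like "jcr:<asset-uuid>" for assets stored in the JCR DAM
              return damfn.getAssetLink(itemKey);
          }
      }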


      I am using the Community Edition and the database is PostgreSQL.
      Versions: Magnolia 6.2.40 and DAM module 3.0.27


People

    • Assignee: Unassigned
    • Reporter: Patrik P (patrikkirtap)
