[EXTDAM-245] [S3] Inconsistent behavior when uploading assets through console Created: 12/Mar/21  Updated: 03/Aug/21  Resolved: 03/Aug/21

Status: Closed
Project: External DAMs
Component/s: s3
Affects Version/s: 1.0.4
Fix Version/s: None

Type: Bug Priority: Neutral
Reporter: Richard Gange Assignee: Unassigned
Resolution: Not an issue Votes: 0
Labels: maintenance
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Template:
Acceptance criteria:
Empty
Task DoD:
[ ]* Doc/release notes changes? Comment present?
[ ]* Downstream builds green?
[ ]* Solution information and context easily available?
[ ]* Tests
[ ]* FixVersion filled and not yet released
[ ]  Architecture Decision Record (ADR)
Bug DoR:
[ ]* Steps to reproduce, expected, and actual results filled
[ ]* Affected version filled
Date of First Response:
Epic Link: Ext DAMs maintenance & partnership support

 Description   

Either the choose dialog is correct or the S3 subapp is correct, but never both at the same time.

When an asset is uploaded through the AWS management console, it is not always available in both places at the same time: sometimes the new asset is visible in the choose dialog but not in the app, and vice versa.

Running the Groovy script mentioned in the docs has no effect, and it is not clear that it is doing anything at all.
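
For reference, a minimal sketch (not the documented script) of what a flush from Magnolia's Groovy console might look like, assuming the connector registers its caches with Magnolia's CacheFactory under the names used in the cacheConfigurations shown in the first comment below; printing the sizes at least makes the flush visible:

import info.magnolia.objectfactory.Components
import info.magnolia.module.cache.CacheFactory

// Assumed cache names, taken from the cacheConfigurations in the first comment.
def factory = Components.getComponent(CacheFactory)
['s3-objects', 's3-buckets', 's3-count', 's3-pages'].each { name ->
    def cache = factory.getCache(name)
    println "${name}: ${cache.size} entries before flush"
    cache.clear()
    println "${name}: ${cache.size} entries after flush"
}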

Expected
Everything should stay consistent. It would also be helpful to have some indication that the cache has actually been flushed.

Notes
Some things to be clarified:

  • Does the Groovy script work? Is it possible to add some debug logging to the code to verify the flush?
  • Can we create a command for triggering the flush? ADCOM-6 (see the sketch after this list)
  • Is it required to run the webapp on AWS for assets to show up immediately? What are the possible limitations of running the webapp locally (or otherwise outside AWS) and connecting to AWS with the S3 connector?
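
On the command idea (ADCOM-6), a rough, hypothetical sketch of a Magnolia command that performs the flush, assuming the same cache names and Magnolia's standard command mechanism; the actual naming and wiring would be decided in ADCOM-6:

import info.magnolia.commands.MgnlCommand
import info.magnolia.context.Context
import info.magnolia.module.cache.CacheFactory

import javax.inject.Inject

// Hypothetical command: clears the connector caches so a flush can be
// triggered on demand (e.g. from an action or scheduled job).
class FlushS3CachesCommand extends MgnlCommand {

    private final CacheFactory cacheFactory

    @Inject
    FlushS3CachesCommand(CacheFactory cacheFactory) {
        this.cacheFactory = cacheFactory
    }

    @Override
    boolean execute(Context context) throws Exception {
        ['s3-objects', 's3-buckets', 's3-count', 's3-pages'].each { name ->
            cacheFactory.getCache(name)?.clear()
        }
        return true
    }
}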


 Comments   
Comment by Richard Gange [ 12/Mar/21 ]

Everything does seem to behave better when the app runs on AWS than when it runs on my local machine.

When setting the timeouts to zero and running on AWS, I'm not having any problems: everything is available immediately.

cacheConfigurations:
  s3:
    caches:
      s3-objects: maximumSize=500, expireAfterAccess=0m
      s3-buckets: expireAfterAccess=0m
      s3-count: maximumSize=500, expireAfterAccess=0m
      s3-pages: maximumSize=500, expireAfterAccess=0m
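
Note: if these are Caffeine-style cache specs (the syntax suggests they are), expireAfterAccess=0m should make entries expire immediately, effectively disabling the caches altogether. That would explain why everything stays consistent with these settings: every request goes straight to S3, which is presumably also cheap enough when the app itself runs on AWS next to the bucket.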