Shelved

Allowing mass tagging of different values via CSV file

Adam Asar 1 year ago in Digital Asset Management, updated by petra.tant 7 months ago

In our current on-prem ADAM tool, we use the ADAM user interface to select the multiple assets that need their metadata updated. We then submit a job called "Stamp Metadata" in ADAM that asks for a .csv file containing the asset names along with all the metadata to be updated. The key is that the metadata can be different and unique per individual row in the CSV.
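To make the shape of that job concrete, here is a minimal sketch in Python (not the actual ADAM job) of what per-row stamping amounts to: each CSV row names an asset and carries its own field values, and the job applies each row independently. The column name AssetName and the update callable are assumptions for illustration, not ADAM's real interface.

```python
import csv

def stamp_metadata(csv_path, update_asset):
    """Apply per-row metadata from a stamping CSV.

    update_asset is a callable (asset_name, fields_dict) -> None; in the
    real tool this would be the DAM's own update job, assumed here.
    """
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            name = row.pop("AssetName")  # assumed column name
            # Every remaining column is a metadata field; values differ per row.
            update_asset(name, {k: v for k, v in row.items() if v})

# Example: a stamping CSV like
#   AssetName,Photographer,UsageRights
#   IMG_001.jpg,J. Smith,Internal only
#   IMG_002.jpg,K. Lee,Web + print
# updates IMG_001 and IMG_002 with different values from the same file.
stamp_metadata("stamp.csv", lambda name, fields: print(name, fields))
```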

This functionality does not exist in Aprimo SaaS and is heavily used in our on-prem environment when we need to mass-tag disparate records.



Shelved

We currently offer mass content updates through the stamping tool, available through services. This functionality is still under consideration for inclusion in the product at some point, but since it is not scheduled for the 2020 calendar, I will shelve the topic for now.

Rejected

This is a duplicate of an earlier request. I will merge the two.

Oh excellent! Can you send me the link to the existing request?

Same for us. We also need the ability to append to metadata fields, for example keywords. Aprimo offers this through the Metadata Stamping tool, with the =CONCATENATE function in Excel. We are deciding on adding this to our Aprimo stack, but first we need to tackle our duplicate issues and determine whether this will help. We are looking into how our "Checksum" rule is configured.
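For anyone curious what that append pattern looks like outside of Excel, here is a minimal Python sketch that does the same thing as the =CONCATENATE trick: it reads an exported metadata CSV and writes a stamping CSV where new keywords are appended to the existing value rather than replacing it. The column names (AssetName, Keywords) and the semicolon delimiter are assumptions, not the actual export format.

```python
import csv

NEW_KEYWORDS = ["campaign2020", "approved"]  # keywords to append to every row

# Column names below are assumptions for illustration; match them to your export.
with open("export.csv", newline="") as src, open("stamp.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=["AssetName", "Keywords"])
    writer.writeheader()
    for row in reader:
        existing = row.get("Keywords", "").strip()
        # Equivalent of =CONCATENATE(existing, ";", new): append, keep old values.
        merged = ";".join([v for v in [existing] + NEW_KEYWORDS if v])
        writer.writerow({"AssetName": row["AssetName"], "Keywords": merged})
```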

Hey Chris,

The metadata stamp tool may be able to solve some use cases, but issues like yours and ours are inevitable in data management, especially when merging legacy data or integrating systems. I'm looking for a sustainable, long-term capability so that as we grow and expand we can keep our environment clean. This request is for a tool included in the UI itself, like the legacy ETLSchemas setting that let you create schemas and run uploads like this yourself.

To tackle our existing duplicate issue, I've used a combination of Checksum and Filename to identify and target duplicates. After exporting the metadata for the set of assets at risk of duplication, I used these two fields to sort records into three groups: Duplicate Checksum Only, Duplicate Filename Only, and Duplicate Checksum and Filename. From there I ran a series of comparisons to identify which records had the most relevant metadata and where they were classified, arriving at a final list of records to keep.
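In case it helps anyone replicating this, a rough sketch of that grouping step in Python: rows sharing a checksum, a filename, or both land in the three buckets described above. The field names (Checksum, Filename) are assumptions chosen to match a typical metadata export.

```python
import csv
from collections import Counter

with open("export.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Count how often each checksum and each filename occurs across the export.
checksums = Counter(r["Checksum"] for r in rows)  # assumed column names
filenames = Counter(r["Filename"] for r in rows)

groups = {"checksum_only": [], "filename_only": [], "both": []}
for r in rows:
    dup_sum = checksums[r["Checksum"]] > 1
    dup_name = filenames[r["Filename"]] > 1
    if dup_sum and dup_name:
        groups["both"].append(r)
    elif dup_sum:
        groups["checksum_only"].append(r)
    elif dup_name:
        groups["filename_only"].append(r)

for name, members in groups.items():
    print(name, len(members))
```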

Committed

Updating metadata through a CSV file is on our roadmap.

Petra, do you have any update on the timeline for when this item will be in production?