I volunteer with a small, local non-profit and manage one of its programs, which brings maps and spatial analysis to residents and communities. We create many maps covering a variety of geographic areas and themes, drawing on a large number of feature classes.
Working in that environment, it can be very difficult to do some fairly simple tasks. For example, many of the maps use feature classes (e.g., roads, parcels) downloaded from the local county. Because dozens of our maps use one or more of these classes, tasks such as improving a feature class's symbology or swapping in more current data, and then propagating that change into every map, are very labor-intensive if not impractical.
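To give a sense of what we imagine automation looking like, here is a rough arcpy.mapping sketch for pushing an updated data source and symbology into every MXD in a folder (assuming ArcMap 10.x; the paths, the "Roads" layer name, and the roads.lyr template file are placeholders, not our actual setup):

```python
# Rough sketch: push a new data source and updated symbology into every MXD
# in a folder. Assumes ArcMap 10.x (arcpy.mapping); all paths are placeholders.
import os
import arcpy

MXD_DIR = r"C:\maps"                      # folder of map documents (placeholder)
NEW_GDB = r"C:\data\county_current.gdb"   # freshly downloaded county data (placeholder)
ROADS_LYR = r"C:\templates\roads.lyr"     # layer file holding the improved symbology (placeholder)

source_lyr = arcpy.mapping.Layer(ROADS_LYR)

for name in os.listdir(MXD_DIR):
    if not name.lower().endswith(".mxd"):
        continue
    mxd = arcpy.mapping.MapDocument(os.path.join(MXD_DIR, name))
    for df in arcpy.mapping.ListDataFrames(mxd):
        for lyr in arcpy.mapping.ListLayers(mxd, "Roads", df):
            # Point the layer at the newer county download...
            lyr.replaceDataSource(NEW_GDB, "FILEGDB_WORKSPACE", "roads")
            # ...and copy the symbology in from the template layer file.
            arcpy.mapping.UpdateLayer(df, lyr, source_lyr, True)
    mxd.save()
    del mxd
```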
Another simple example is exporting the maps to JPEG and PDF versions for access by users. Every map we build has five non-MXD versions: three JPEGs (300 dpi, 72 dpi, and a 72 dpi thumbnail) and two PDFs (300 dpi and 700 dpi). To create them we do three exports (the 300 dpi JPEG and the two PDFs) and then produce the 72 dpi JPEGs with an image-processing tool (e.g., IrfanView).
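For the export step, something like the following is roughly what we have in mind (again assuming ArcMap 10.x; the paths and thumbnail size are placeholders, and the thumbnail step uses Pillow instead of IrfanView):

```python
# Rough sketch: produce all five export versions from one MXD in a single pass.
# Assumes ArcMap 10.x (arcpy.mapping) and Pillow; paths and sizes are placeholders.
import arcpy
from PIL import Image

mxd_path = r"C:\maps\parcels_overview.mxd"   # placeholder
out_base = r"C:\exports\parcels_overview"    # placeholder

mxd = arcpy.mapping.MapDocument(mxd_path)

# PDFs at 300 and 700 dpi
arcpy.mapping.ExportToPDF(mxd, out_base + "_300.pdf", resolution=300)
arcpy.mapping.ExportToPDF(mxd, out_base + "_700.pdf", resolution=700)

# JPEGs at 300 and 72 dpi (no separate image-processing step needed)
arcpy.mapping.ExportToJPEG(mxd, out_base + "_300.jpg", resolution=300)
arcpy.mapping.ExportToJPEG(mxd, out_base + "_72.jpg", resolution=72)

# 72 dpi thumbnail: shrink the 72 dpi JPEG down to a small preview
thumb = Image.open(out_base + "_72.jpg")
thumb.thumbnail((200, 200))                  # placeholder thumbnail dimensions
thumb.save(out_base + "_72_thumb.jpg", "JPEG")

del mxd
```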
There are quite a few processes like these that make up the overall “system” we’ve designed.
Although there may be specific suggestions for particular tasks (e.g., use ModelBuilder), ideas about how to design the overall data architecture for the entire service would be especially relevant.
Any advice would be greatly appreciated. I'm even open to the suggestion that we describe the overall system here so others can judge whether we've designed it well.