Joining a table in ArcView! tricky?

1 reply to this topic

Covasnianu Adrian



  • Validated Member
  • 48 posts
  • Gender:Male
  • Location:Iasi
  • Romania


I'm trying to join two datasets:
The first is a shapefile with the localities (13,749 features); the second is a table with 17,661 rows listing the localities and their cable networks. The difference between the two is that there are more cable networks than localities.

Now I want to represent those cable network companies spatially in a GIS environment.

Is it possible?

The table contains 3 columns: villages, counties and name of the network company.
The shapefile contains the villages, counties and more.

I know that a join requires a common field in both datasets; in my case that should be the villages.

I don't know how to deal with this issue!

Any suggestion would be helpful!
GIS user

PhD geographer
CUGUAT-TIGRIS Research Center
University Al.I.Cuza Iași
Faculty of Geography & Geology

email: covasnianu.adrian@gmail.com

J Wallace



  • Validated Member
  • 11 posts
  • United States

I assume the discrepancy between the row counts is because a single village may have more than one cable network?

With higher ArcGIS license levels (ArcInfo, ArcEditor) I believe this can be accomplished through a one-to-many table relate, but with ArcView I have only been able to work around this problem by manually editing the table. To ensure that each village (the join field) has only a single row of data associated with it, each additional cable network should be added as a new column (Cable network 1, Cable network 2, ...) in the spreadsheet. Joining this reformatted table to the shapefile on the village ID should then produce an attribute table that looks like this:
Village_ID, Cable_network_1, Cable_network_2, etc...

Granted, this approach becomes less tenable as the dataset grows (and 17,000+ rows probably falls into that category).
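For what it's worth, the manual reformatting step above can be scripted if you can export the table to something readable outside ArcView. Here's a rough sketch using pandas (the column names "Village", "County", and "Cable_network" are made up for illustration — substitute your actual field names): number the networks within each village, then pivot so each village becomes one row.

```python
import pandas as pd

# Stand-in for the exported table: one row per (village, network) pair.
table = pd.DataFrame({
    "Village": ["Iasi", "Iasi", "Pascani"],
    "County":  ["Iasi", "Iasi", "Iasi"],
    "Cable_network": ["NetA", "NetB", "NetC"],
})

# Number the networks within each village: 1, 2, 3, ...
table["n"] = table.groupby("Village").cumcount() + 1

# Pivot so each village gets one row, with the networks spread across
# Cable_network_1, Cable_network_2, ... columns. (The County column is
# dropped here; it could be merged back in afterwards if needed.)
wide = table.pivot(index="Village", columns="n", values="Cable_network")
wide.columns = [f"Cable_network_{i}" for i in wide.columns]
wide = wide.reset_index()
print(wide)
```

The resulting one-row-per-village table can then be saved back out (e.g. as a DBF or CSV) and joined to the shapefile on the village field as usual.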

Does anyone else have a less manually-intensive workaround for this problem when dealing with non-standardized data?
