I know @Giuliana was given the task of gathering this info, but we should nevertheless respect the legal implications, especially since we have no formal legal body available and the person who entered the data is fully responsible for what has been published there.

I propose to close down the spreadsheet and switch it to “invite only / request access” view from now on.

Right now, anyone who uses their bookmarks or our website can still access the compilation of addresses and phone numbers. It is not that these numbers aren’t public individually, but their centralized compilation makes them vulnerable.

josefkreitmayer:

once that is done, delete the columns with personal contact details in
the public spreadsheet

Unfortunately this alone doesn’t work either: the same concern applies, since deleted data remains visible in the document’s revision history, and deleting and recreating the spreadsheet doesn’t help, as it changes the link.

But apparently it’s possible to copy everything to the clipboard, restore the spreadsheet to its first version (or an appropriate one from before the details were added), and paste the cleaned version back in. We should of course make a backup copy (with a new link) beforehand.

Thanks @toka for raising awareness. The issue seemed to be solved before, but due to an enabled filter some addresses were still visible, and thanks to the deeper research of @almereyda we also found the possible history leak.

I would suggest that we still display some more columns, e.g. a) the existing mapping system, b) Google Maps/OSM, c) a description. That would help people navigate the inventory.
Who is preparing the form?

Unfortunately this still doesn’t consider changes and only appends everything over and over again. We may want to use daff for the intermediary step (dat and daff).
The wget part could also be exchanged with curl, if one wants to avoid storing a CSV locally next to the dat. If such a clone already exists, one would change into the $SOURCE folder and issue dat pull $PROTO$SOURCE instead to update the local copy.
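To illustrate the intermediary step described above — diffing the newly fetched CSV against the last synced state instead of appending everything again — here is a minimal stdlib sketch of the kind of row-level comparison a tool like daff automates. The column names and key field are hypothetical, not taken from the actual spreadsheet.

```python
import csv
import io

def diff_rows(old_csv: str, new_csv: str, key: str):
    """Compare two CSV snapshots by a key column and report which rows
    were added, removed, or changed — instead of re-appending every row
    on each sync (the intermediary step daff would automate)."""
    old = {r[key]: r for r in csv.DictReader(io.StringIO(old_csv))}
    new = {r[key]: r for r in csv.DictReader(io.StringIO(new_csv))}
    added   = [k for k in new if k not in old]
    removed = [k for k in old if k not in new]
    changed = [k for k in new if k in old and new[k] != old[k]]
    return added, removed, changed

# Hypothetical sample data, only to show the shape of the result:
old = "id,name\n1,Alice\n2,Bob\n"
new = "id,name\n1,Alice\n2,Robert\n3,Carol\n"
print(diff_rows(old, new, "id"))  # → (['3'], [], ['2'])
```

Only the resulting additions and changes would then need to be written into the dat, keeping its history meaningful.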

Further on, all improvements regarding UUIDs (thanks @toka for the menu entry, however it came into existence) have only been applied to the old link. We still have no workflow to securely share the UUIDs between both instances.
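One possible direction for the missing UUID workflow — a suggestion, not something the thread has settled on — is to derive the UUIDs deterministically from a stable row key, so both instances compute identical IDs independently and nothing has to be copied between them. The namespace string below is a placeholder.

```python
import uuid

# Hypothetical shared namespace; both instances would agree on it once.
NAMESPACE = uuid.uuid5(uuid.NAMESPACE_DNS, "example.org/inventory")

def row_uuid(row_key: str) -> uuid.UUID:
    """Derive the same UUID on every instance from a stable row key
    (name-based UUIDv5), so IDs never diverge between the two copies."""
    return uuid.uuid5(NAMESPACE, row_key)

print(row_uuid("site-42") == row_uuid("site-42"))  # → True
```

The trade-off is that the row key itself must never change; if rows have no stable key, random UUIDs plus an explicit sync step remain necessary.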
Again, using Google Docs as a database, as appealing as it may have been and continues to be, turns out to be a bad choice. This will have to be one of the main objectives for an initial 2015 monkeys call.

semantic mediawiki, which seems to be the best tool, to collect this data collectively

Our experience tells us that evaluating “the best tool” can become quite tense once one starts to consider multiple options. As long as we only look at one solution, it will of course be the best.

Semantic MediaWiki is supposedly a good fit for ontology design, but then it also competes with Ontowiki.
If the discussion is about managing many maps, the ideal interface would be a map itself. That is then another question to answer.