Discussion List
Public Group · active 5 months, 4 weeks ago
This forum has been archived, and posting to it is no longer possible. However, conversations are still happening at the Pelagios Google Group.
You can join the group and read archived discussions here: https://groups.google.com/d/forum/pelagios-announcements
Integrating geo-annotations in a reading/learning environment
Tagged: network, omeka, Recogito, tools, visualization
This topic contains 3 replies, has 3 voices, and was last updated by c-palladino 2 years, 6 months ago.
May 25, 2017 at 4:17 pm #2729
Visibility is an issue in Digital Humanities: as long as we’re all engaged in producing digital content, very few people know the real data behind the fancy websites and the cool interfaces. We are all academics and care so much about the quality of our work – yet we don’t give a damn if other humans can’t understand/see/use it for their benefit. (Which is kind of a paradox, considering that Academia claims to be “beneficial to humanity” as part of its justification for spending public money.)
We recently had a conversation about making the current annotations visible in Recogito, in order to understand who’s annotating what at a glance – this is going to be a very important step in the process, and we will also be able to see the names of annotators outside of our limited circle.
However, what would happen if I wanted to see the map of the text that I’m annotating outside of Recogito (let’s say, on my personal website or on my GitHub page), possibly integrated with other resources – treebanking, translation alignment, XML-compliant text, other kinds of annotations? It seems that after the efforts of the Son of Suda Online (which cannot be considered complete, or “the standard” for digital editions, by any means) attention to this topic has been decreasing, and properly editing digital editions, enriched with all the necessary content, is limited to individual projects, which have more or less developed their own standards (http://www.dfhg-project.org/). I have a bunch of students who have been annotating, aligning and treebanking texts for their final exam, and I would like to display their work somehow. Not to mention that this opportunity would make my work look good too 😛
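For what it’s worth, one lightweight way to get an annotated text’s map onto a personal website or GitHub page is to convert the place annotations into GeoJSON, which Leaflet (and GitHub’s own file preview) can render directly. The sketch below assumes a hypothetical export shape – the field names `quote`, `place_uri`, `lat` and `lng` are illustrative, not Recogito’s actual export schema:

```python
import json

# Hypothetical rows, as they might come out of an annotation export;
# the column names here are illustrative, not Recogito's real schema.
annotations = [
    {"quote": "Roma", "place_uri": "http://pleiades.stoa.org/places/423025",
     "lat": 41.89, "lng": 12.49},
    {"quote": "Athenae", "place_uri": "http://pleiades.stoa.org/places/579885",
     "lat": 37.97, "lng": 23.72},
]

def to_geojson(rows):
    """Turn place annotations into a GeoJSON FeatureCollection that
    Leaflet (or GitHub's built-in map preview) can render directly."""
    features = []
    for row in rows:
        features.append({
            "type": "Feature",
            # GeoJSON coordinate order is [longitude, latitude]
            "geometry": {"type": "Point",
                         "coordinates": [row["lng"], row["lat"]]},
            "properties": {"quote": row["quote"], "uri": row["place_uri"]},
        })
    return {"type": "FeatureCollection", "features": features}

geojson = to_geojson(annotations)
with open("annotations.geojson", "w") as f:
    json.dump(geojson, f, indent=2)
```

Dropping the resulting `annotations.geojson` into a repository is enough for GitHub to show it as an interactive map; embedding it in a personal site only needs a few lines of Leaflet.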
I’d be interested in hearing your opinions on this issue. Any technical advice on how to achieve this would make for an inspiring discussion.
May 26, 2017 at 7:27 am #2730
Many thanks @c-palladino for initiating this discussion. I couldn’t agree more that this is extremely relevant!
I hope I’m now not derailing a request for technical advice into too much of a fundamental discussion. But I think (and agree) there’s simply a big open gap when it comes to this issue.
Starting from within the limited confines of Recogito: I’ve spent quite some time recently thinking about how the results of individuals’ work can be made more visible there. Recogito has “profile pages” in principle. They are conveniently located at a nice URL (http://recogito.pelagios.org/{your username}), and were meant to serve as a public “shop window” for a user’s work, similar to a Twitter or GitHub profile page. Yet at the moment, hardly anyone knows about them – and for good reason! The way they currently look is probably the best example of how things should NOT be 😉 (See our own Pelagios 3 profile page for a prime example!) Also, there is no search across (public) documents in Recogito, as @c-palladino says. As so often, our plans & ideas are bigger than our tight DH budget allows. But I still think we have at least a solid basis here, and every intention to turn the profile pages into what they deserve to be.
But coming back to the real question of how we can combine outputs from different tools into one online publication: of course, standards are one aspect here. But I’m tempted to say: we have so many. Yet interoperability seems so very limited. (I’ll refrain from linking to that infamous XKCD comic here… 😉 ) What we lack is implementations that fit together! With Recogito, we’ve been trying to offer at least several options for getting data out. Getting data into Recogito from elsewhere? That’s a much less explored area. In principle, Open Annotation (OA) is the obvious candidate standard here; and I’ve heard comments along the lines of “well, if you use OA, everything should just work, right?”. But, sorry, nope:
- OA only covers the annotations, so in a system that deals with documents (and users, and versions, and authority files,…) we need extra conventions about how we cover those additional aspects, how things are packaged up, etc.
- there’s flexibility in how to model the insides of an OA annotation: e.g. how to model the ‘selectors’, the parts of the annotation that identify where it is located within a document (and we are dealing with different document types here – plain text, images, TEI…). Again, more conventions are needed that are not necessarily fixed by OA.
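To make the selector issue concrete, here is what one and the same place annotation can look like under the W3C Web Annotation model (OA’s successor), anchored with two different selector types. The document URL, surrounding text and character offsets are invented for illustration:

```python
import json

# One annotation, anchored two ways. Which selector a tool emits is
# exactly the kind of convention that the standard itself leaves open.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "body": [{
        "type": "SpecificResource",
        "purpose": "identifying",
        "source": "http://pleiades.stoa.org/places/423025",  # gazetteer URI
    }],
    "target": {
        "source": "http://example.org/texts/my-document.txt",  # hypothetical
        "selector": [
            {   # quote-based: robust against small edits to the text
                "type": "TextQuoteSelector",
                "exact": "Roma",
                "prefix": "profectus est ",
                "suffix": ", ubi",
            },
            {   # offset-based: simple, but breaks if the text changes
                "type": "TextPositionSelector",
                "start": 112,
                "end": 116,
            },
        ],
    },
}
print(json.dumps(annotation, indent=2))
```

Two tools can both be perfectly standards-compliant and still fail to exchange this annotation if one only understands quote selectors and the other only position selectors – hence the need for concrete agreements on top of the standard.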
To clarify: yes, there are standards for all of this! But we need concrete agreements on which ones we are using, and how we are using and combining them. I feel what we need now in digital projects is a much stronger focus on implementing some real interoperability scenarios, between existing tools, in actual practice.
Finding the funds to finance significant software development is arguably much more difficult than finding the time (and funds) to discuss approaches. Even more so if that development work should happen, in coordination, across multiple projects. That inevitably requires larger-scale/longer-term commitment from multiple sides. But IMO this is what we need if we want to progress.
Ok, standards rant over, now coming back to the original issue 😉
Another question, in my opinion, is: do we already have some good publishing platforms to begin with? Speaking again with my Recogito hat on, I’ve started to ponder, for example, what an integration (or, rather, “seamless data exchange”) between Recogito and Omeka could look like. Omeka is wildly popular as a publishing framework in DH, so perhaps it’s (one of) the answers to this? I know that others have specifically been looking into moving data from Recogito to Omeka as well. Personally, I’d find this an extremely exciting combination! But I would appreciate other/additional thoughts on this!
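To make the idea a little more tangible, here is a rough sketch of the kind of crosswalk such an exchange would need: one annotated document becomes one Omeka item, with confirmed place annotations carried over as Dublin Core coverage values (Omeka’s metadata model is Dublin Core based). All field names and payload shapes below are illustrative, not Omeka’s actual API schema:

```python
# Hypothetical annotated document, as a tool like Recogito might export it.
document = {
    "title": "Herodotus, Histories, Book 1",
    "author": "Herodotus",
    "places": [
        {"toponym": "Sardis", "uri": "http://pleiades.stoa.org/places/550867"},
        {"toponym": "Delphi", "uri": "http://pleiades.stoa.org/places/540726"},
    ],
}

def to_omeka_item(doc):
    """Map an annotated document onto a Dublin-Core-flavoured item dict.
    The dc:* keys here are a sketch of the mapping, not a real payload."""
    return {
        "dc:title": doc["title"],
        "dc:creator": doc["author"],
        # one coverage entry per confirmed place annotation,
        # keeping the gazetteer URI alongside the toponym
        "dc:coverage": [f'{p["toponym"]} <{p["uri"]}>' for p in doc["places"]],
    }

item = to_omeka_item(document)
```

The hard part, of course, is not this mapping but agreeing on it: which Dublin Core field carries the gazetteer link, what happens to annotation-level metadata, and so on – exactly the “concrete agreements” point from above.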
Finally (and really – primarily – to throw a deliberately controversial point out there 😉 ): Does DH perhaps need an online “publishing hub”?
With my personal computer science bias, GitHub always feels like the benchmark for everything. To me it seems that GitHub really changed the way people build and use open source software. Obviously, collaborative development was done before GitHub, and online repositories with nice code-browsing web pages existed too. But what GitHub IMO really got right is:
- content presentation: data formats automagically display in an appropriate rich presentation (code highlighted according to the specifics of the language, CSV rendered as tables, GeoJSON as maps, etc.); everything is perfectly linkable (down to individual lines or sections of code); and you get a nice & prominently displayed splash page just by dropping in a readme file
- the social processes: users’ personal identities are a primary entry point, rather than the code itself; there’s quick-to-grasp info on how old/active/collaborative a project is; there are all the features for “following”, discussion etc.; and, last but not least, a dead simple mechanism to create your own fork of other people’s work at the click of a button
This, IMO, is what makes people’s work look good on GitHub and makes things discoverable – while at the same time GitHub doesn’t enforce any form of structure, or limits on what you put into your repository.
So my question – blindly ignoring issues of centralization, single point of failure, operations and responsibility for the sake of discussion: isn’t such a hub (physical or virtual) for DH projects’ results and data formats something that’s missing?
This reply was modified 2 years, 6 months ago by Rainer Simon.
May 27, 2017 at 1:09 pm #2736
Just a quick intervention on my part re. Recogito’s relationship with publishing platforms: I’m entirely in agreement with Rainer about the importance of being able to crosswalk from Recogito to other DH platforms out there. Something I’d really like to see is the ability to visualise and analyse the annotations made in Recogito in different ways (say, for network analysis) – so, Recogito as part of a pipeline from data curation to data analysis.
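As a toy example of that pipeline idea, one could turn place annotations into network-analysis input by weighting an edge between two places by the number of documents in which both are annotated. The (document, place) pairs below are invented for illustration, not real export data:

```python
from itertools import combinations
from collections import Counter

# Hypothetical (document, place) pairs, as an annotation export might
# provide them; the shape and values are purely illustrative.
annotations = [
    ("Histories 1", "Sardis"), ("Histories 1", "Delphi"),
    ("Histories 1", "Babylon"),
    ("Anabasis 1", "Sardis"), ("Anabasis 1", "Babylon"),
]

def cooccurrence_edges(pairs):
    """Weight an edge between two places by the number of documents
    in which both are annotated - a minimal co-occurrence network."""
    by_doc = {}
    for doc, place in pairs:
        by_doc.setdefault(doc, set()).add(place)
    edges = Counter()
    for places in by_doc.values():
        for a, b in combinations(sorted(places), 2):
            edges[(a, b)] += 1
    return edges

edges = cooccurrence_edges(annotations)
# e.g. Sardis and Babylon co-occur in both documents, weight 2
```

The resulting weighted edge list is exactly what tools like Gephi or the networkx library expect as input, so the “pipeline” here is just: annotate in Recogito, export, transform, analyse elsewhere.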
In fact, this is an especially timely thread since there are two events coming up this year where we might be able to make some progress on this.
The first is a workshop that Chiara and Sarah from the Pedagogy WG are looking to hold sometime this year (possibly at the beginning of December). This will be at the University of Virginia’s Scholars’ Lab, which, among other cool things, has developed Neatline, a suite of online tools that allows users to mash up textual and map data. Being based on Omeka, Neatline could be a prime case study for investigating what “an integration (or, rather, ‘seamless data exchange’) between Recogito and Omeka could look like”…
The second is Linked Pasts III, also due to take place at the beginning of December, which this year will be hosted at the Stanford Humanities Center (SHC) and the Center for Spatial and Textual Analysis under the direction of Nicole Coleman. As she discussed during her presentation at Linked Pasts II in Madrid, her team there has for some time been particularly interested in data visualization and network analysis. I for one would be really interested to see whether any of the range of tools that they’ve been developing could again be modified to ingest data coming from Recogito.
Exciting times.
elton
June 1, 2017 at 9:26 am #2741
Thanks @rainer and @etebarker for the acute responses!

This is definitely the time for having this conversation. As our projects are growing and DH is quickly rising to the status of “discipline” rather than “method”, interoperability still seems to be something that people are not that interested in. I am tempted to ask whether interoperability at this stage is just too complicated, or whether it simply goes against the laws of the market, where it’s more important (easier?) to get funding for new projects and platforms than to integrate them with already existing material.

As for Recogito, again, one of the things that are missing is a hub where we could see what people have been doing with their Recogito data after having annotated their texts (if they are doing anything at all). In this regard, the workshop that @etebarker refers to will be a good framework to raise the subject, and maybe we could also issue a small “call for posters” intended for Recogito users to show how they have been using their annotations.

@rainer, what you say is completely on point here. To reply to your last question – no, I don’t think there is anything like GitHub addressed specifically at DH projects. Omeka is close, but it is far less used and intuitive than GitHub, and less usable for open collaboration. As for GitHub, it definitely has its strengths, but also its weaknesses: it was built for computer scientists, and everything is a direct consequence of that – including the fact that it lacks any publication standard, because in CS you would go ahead and publish your code even if it’s complete garbage, whereas in Academia this is what DH is most criticised for. Second, it just doesn’t get the principle of citation and bibliography. As a consequence, many projects are just floating around in the GitHub limbo without appropriate visibility, because there is simply no way to find a resource if you don’t know exactly what you are looking for. A simple tag system (as on JSTOR or Academia.edu) would solve that, and make any publishing hub more usable for disciplines where the bibliography *is* important.

So, YES, we need a universally agreed publishing “way” – not just a platform, but a *method* that everybody agrees on, that’s far more complex and flexible than traditional platforms – something quite like GitHub, but different 😉 And if anyone in Commons has any suggestions on that, I would be completely open to see what the alternatives are.

Secondly, to go back to the technical issue of integration: the lack of standards (which immediately reminded me of that infamous comic you mentioned…) also affects the methods. OA is commonly agreed on as a way to integrate resources, but if my academic partners don’t get OA, my chances of integrating my project with theirs are simply gone. But this is a bigger problem, one that affects the whole world of Linked Data. My perhaps naive question here would be: is it we who don’t get what LOD technically needs to ensure interoperability, or is it LOD that has some issues that need to be discussed by the global community? Moreover: what use is currently being made of APIs and plugins, which could perhaps solve some of these issues?