A rival translation of Origen’s Homilies on Ezekiel

Quite by accident I learned today of another projected translation of Origen’s Homilies on Ezekiel.  It’s due to appear in January 2010 as part of the Ancient Christian Writers series, translated by Thomas Scheck, who has translated several other volumes of Origen’s homilies.  The Amazon advert is here.

Frankly this is a nuisance and a half.  We’ll probably beat that deadline; but who needs two competing translations?  More to the point, is it a sensible thing to do with my money?

Not sure what to do now.

UPDATE: I’ve written to Dr Scheck to ask the status of his work; but from his home page it appears to be complete.

I’ve done some calculations.  The whole lot is about 200 pages of Latin in the SC edition, which at $10 per page comes to $2,000.  Of this, about a quarter is done and indeed paid for.  So we’re talking about a further $1,500.

Perhaps the answer is to go upmarket, and add a Latin text as well as a translation.


14 thoughts on “A rival translation of Origen’s Homilies on Ezekiel”

  1. My pleasure! It’s the least I can do in view of the many benefits I’ve gotten from your efforts to scan, edit, and publish many otherwise unavailable texts.

    Here is a more general thought: have you considered at all the possibility of applying recent developments in machine translation (i.e., statistical machine translation) to Latin->English translation? I’ve emailed Google a couple of times about this, but so far haven’t gotten a reply. The algorithms and software already exist; in theory, one just needs to process a large amount of parallel text — Latin sentences and corresponding English translations for previously translated material — to prime the translator. Then, voila!, one has the ability to translate the entire PL, etc. The translations wouldn’t be perfect, but they’d be a good starting point.
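
    To make the idea concrete, here is a minimal sketch in Python (with invented file names and sentence pairs) of what such a parallel corpus looks like in practice: two plain-text files, one sentence per line, with line N of one being a translation of line N of the other. This is roughly the input format SMT toolkits such as Moses expect.

    ```python
    # Minimal sketch: write a tiny line-aligned parallel corpus to disk.
    # File names and sentence pairs are invented for illustration; a real
    # corpus would contain many thousands of such pairs.
    pairs = [
        ("in principio creavit Deus caelum et terram",
         "in the beginning God created the heaven and the earth"),
        ("rex regnat sed non gubernat",
         "the king reigns but does not govern"),
    ]

    with open("corpus.la", "w", encoding="utf-8") as la_file, \
         open("corpus.en", "w", encoding="utf-8") as en_file:
        for latin, english in pairs:
            la_file.write(latin + "\n")
            en_file.write(english + "\n")
    ```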

  2. This is a very interesting idea, and one that I had never heard of, since I’m not involved in the research world in this area. I will look into it some time. One barrier may be the looseness of existing translations, tho.

    Do you have any suggestions where to start? Where can one find out about all this?

  3. > Where can one find out about all this?

    http://en.wikipedia.org/wiki/Statistical_machine_translation

    Statistical Machine Translation (SMT) is a revolution in machine translation. Previously, people tried to translate text in terms of “deep structure”:

    1. Take a sentence in the original language
    2. Parse it and determine the deep structure (what it ‘means’)
    3. Express the deep structure in another language

    That paradigm dominated for many years, but proved fairly unwieldy. Then along comes an enterprising computer scientist who figures out that better translations can be made just on the basis of statistical correlations and probabilities. This method doesn’t make any attempt to ‘understand’ the text at all. If one has a lot of sentences in Latin which contain the word ‘rex’, and nearly always finds the word ‘king’ in the corresponding English sentences, the probability is high that ‘rex’ means ‘king’. The same principle applies to groups of words.
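
    As a rough illustration of the ‘rex’ / ‘king’ point, here is a toy Python sketch (invented data) that counts word co-occurrences across aligned sentence pairs and turns them into crude translation probabilities. Real SMT systems refine this with EM-style word alignment and phrase tables, but the underlying signal is the same.

    ```python
    # Toy sketch: estimate rough translation probabilities from how often
    # each English word co-occurs with a Latin word in aligned sentences.
    # The sentence pairs are invented for illustration.
    from collections import Counter, defaultdict

    pairs = [
        ("rex venit", "the king comes"),
        ("rex dormit", "the king sleeps"),
        ("puer venit", "the boy comes"),
    ]

    cooccur = defaultdict(Counter)
    for latin, english in pairs:
        for la_word in latin.split():
            cooccur[la_word].update(english.split())

    def translation_probs(la_word):
        counts = cooccur[la_word]
        total = sum(counts.values())
        return {en: round(n / total, 2) for en, n in counts.most_common()}

    print(translation_probs("rex"))
    # {'the': 0.33, 'king': 0.33, 'comes': 0.17, 'sleeps': 0.17}
    ```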

    By 2006 this paradigm replaced the old one, and Google Translate switched over.

    Personally I’m impressed with the accuracy. When I need to compose a letter in another language, I let Google Translate do the first draft, and then I touch up a word here and there. Increasingly it seems that few if any edits are necessary.

    Looseness of translations is possibly an issue. What seems best is to have text corpora where the Latin and English versions line up exactly, sentence for sentence.

    Inter-translating modern European languages like English and French, or German and Spanish, is apparently facilitated by large EU databases containing simultaneous translations of government proceedings.

    The Bible would supply an obvious source of strict verse-to-verse parallel versions, but has comparatively few words as these things go.

    Complete parallelism isn’t necessary, however. I would think that algorithms for optimally aligning a pair of texts are an area that gets a lot of research attention.
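
    For what it’s worth, here is a crude Python sketch of length-based sentence alignment in the spirit of the classic Gale and Church approach: a simple dynamic program that pairs sentences so that the total difference in character length is minimised. Real aligners also handle 2-to-1 and 1-to-2 merges and model length ratios statistically; the example sentences are invented.

    ```python
    # Crude length-based sentence alignment via dynamic programming.
    # Only 1-1 pairings and skips are handled; real aligners (e.g. the
    # Gale-Church algorithm) also merge and split sentences.
    def align(src, tgt, skip_cost=30):
        n, m = len(src), len(tgt)
        INF = float("inf")
        cost = [[INF] * (m + 1) for _ in range(n + 1)]  # cost[i][j]: best cost for src[:i], tgt[:j]
        back = [[None] * (m + 1) for _ in range(n + 1)]
        cost[0][0] = 0
        for i in range(n + 1):
            for j in range(m + 1):
                if cost[i][j] == INF:
                    continue
                if i < n and j < m:  # pair src[i] with tgt[j]
                    c = cost[i][j] + abs(len(src[i]) - len(tgt[j]))
                    if c < cost[i + 1][j + 1]:
                        cost[i + 1][j + 1], back[i + 1][j + 1] = c, (i, j, "pair")
                if i < n:  # leave src[i] unmatched
                    c = cost[i][j] + skip_cost
                    if c < cost[i + 1][j]:
                        cost[i + 1][j], back[i + 1][j] = c, (i, j, "skip")
                if j < m:  # leave tgt[j] unmatched
                    c = cost[i][j] + skip_cost
                    if c < cost[i][j + 1]:
                        cost[i][j + 1], back[i][j + 1] = c, (i, j, "skip")
        pairs, i, j = [], n, m
        while (i, j) != (0, 0):  # walk the back-pointers to recover the pairing
            pi, pj, op = back[i][j]
            if op == "pair":
                pairs.append((src[pi], tgt[pj]))
            i, j = pi, pj
        return list(reversed(pairs))

    latin = ["In principio erat Verbum.", "Et Verbum caro factum est."]
    english = ["In the beginning was the Word.", "And the Word was made flesh."]
    print(align(latin, english))
    ```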

  4. Easy enough to test — just translate the same text using your existing translation software and using Google translation. Systran was an industry leader until SMT took over.

  5. Systran is what I have been using, and for French it really is quite good. Today I did as you suggested, and translated some of Agapius in parallel using Google. Quality is slightly better with Google, but it has a tendency to omit words which are actually important. There’s not enough in it, so far, to make much difference. This may not be true for German or Italian, tho, which Systran didn’t do very well.

    One amusing feature of Google is that I wound up with a bunch of Thous and Thees; I suspect a biblical passage had appeared, which had triggered some use of the KJV!?

  6. > wound up with a bunch of Thous and Thees;

    That’s pretty funny. SMT is, as you suggest, sensitive to the texts used for priming. Now if the results included ‘Ye’ a lot, purists might argue that this means you/plural, which has no modern English equivalent (except y’all).

    It seems to me that many institutions — from departments of classics, to Sources Chrétiennes, to even the Vatican — would be thinking about ramping up some large scale Latin or Greek translations. Potentially EU funding would be available.

    The trouble with large institutions, of course, is that they take a long time to get things done. If Google wanted, they could have a Latin–English translator running in a month.

  7. I wish that some large institution *would* get a slab of funding and just do the whole PL and PG. It wouldn’t cost a fraction of the sums wasted every year on funding third-world despots. In the UK Gordon Brown apparently wants to give $1.5bn away this year. We could do the lot for about $5m I would guess, and still have enough to fill up the car!

    I wonder if there is any way to suggest Latin to Google?

  8. I emailed one of their scientists but got no reply.

    Perhaps a letter from someone in a recognized institution would get their attention.

    That said, I’m not sure that Google is the optimal solution. The algorithms involved are fairly clear-cut. There are lots of grad students these days who can implement them.

  9. I imagine n-grams would come in pretty handy for something like this, especially in cases where a word’s meaning is modified by its location in a sentence or by the words following or preceding it.

    http://en.wikipedia.org/wiki/Ngram

    I’ve actually seen n-grams applied to language detection algorithms, trying to figure out what language a body of text is in… and Google uses the concept to detect auto-generated spam content by looking for abnormal word groupings.
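
    As a rough illustration, here is a toy Python sketch of character-trigram language detection: build a trigram frequency profile per language from a small sample and score unknown text by how well its trigrams overlap. The training snippets here are tiny and invented, so it is purely illustrative; real detectors train on much larger samples.

    ```python
    # Toy character-trigram language detection.
    # Tiny invented training snippets; real detectors use large samples.
    from collections import Counter

    def trigrams(text):
        text = " " + text.lower() + " "
        return Counter(text[i:i + 3] for i in range(len(text) - 2))

    samples = {
        "latin": "in principio creavit deus caelum et terram et spiritus dei",
        "english": "in the beginning god created the heaven and the earth",
    }
    profiles = {lang: trigrams(text) for lang, text in samples.items()}

    def detect(text):
        grams = trigrams(text)
        scores = {lang: sum(min(n, profile[g]) for g, n in grams.items())
                  for lang, profile in profiles.items()}
        return max(scores, key=scores.get)

    print(detect("deus caelum et terram fecit"))   # -> 'latin'
    print(detect("the king of heaven and earth"))  # -> 'english'
    ```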
