The hunt for a copy of a 1969 survey report, detailing sub-aqua dives on the ruins of the Roman fort offshore at Felixstowe, continues! I had a nice email back from the local sub-aqua club, who no longer have a copy. But it seems that the report’s author, a “J. Errington”, is in fact “Jeff” who has a business interest in “Diveline” in the St Johns Road in Ipswich. So … another email. Mr E. must have been a rather young man back in 1969. Let’s hope that he still has a copy!
I’ve had a busy week, ending with a rather splendid college reunion. But of course everything else has gone out of the window, and I have a large sleep debt to pay off.
Today brings another chunk of translation of an early Latin Vita of St George. Chapters 9 and 11 are in my inbox now. The version is a very rough draft. The only difficulty is that the translator doesn’t read my emails with feedback, so makes the same mistakes every time. This means that I shall have to correct and finish it myself. I hope to do the job on these chunks this week. The translation is going forward nicely, tho; some 8 chapters still to do.
Today also brought a welcome email from the Colchester and Ipswich Museum Service with unwelcome news. In 1969 a team of divers surveyed the ruins of a Roman fort in the sea off Felixstowe, known locally as Walton Castle. A report was filed with the museum, and was accessible a decade ago. The email today tells me that they cannot locate it now. I have therefore written to the sub-aqua club, who may have it in their files. Another email went to the Suffolk Institute of Archaeology, who published the article mentioning the survey, to see if I can get in contact with the author in case he has a copy. We tend to think of museums and archives as safe repositories. But the truth is that history is vanishing before our eyes. So it has always been.
Last week I was working industriously on the new QuickLatin. This is going well, and crude errors are disappearing. I must get a version released online, as a base version for further work.
My backlog of interesting topics to blog about continues to increase. So much to do!
It is Saturday evening here. I’m just starting to wind down, in preparation for Sunday and a complete day away from the computer, from all the chores and all my hobbies and interests. I shall go and walk along the seafront instead, and rest and relax and recharge.
Sometimes it is very hard to do these things. But this custom of always keeping Sunday free from everything has been a lifesaver over the last twenty years. Most of my interests are quite compelling. Without this boundary, I would have burned out.
Phase 2 of the QuickLatin conversion from VB6 to VB.Net is complete. Phase 1 was the process of getting the code converted, so that it compiled. With Phase 2, I now have some simple phrases being recognised correctly and all the obvious broken bits fixed. The only exception to this is the copy protection, which I will leave until later.
Phase 3 now lies ahead. This will consist of creating automated tests for all the combinations of test words and phrases that I have used in the past. Code like QuickLatin has any number of special cases, which I have yet to exercise. No doubt some will fail, and I will need to do some fixes. But when this is done then the stability of the code will be much more certain. But I am trying to resist the insidious temptation to rewrite bits of the code. That isn’t the objective here.
I began to do a little of this testing over the last few hours. Something that I missed is code coverage – a tool that tells me visually how much of the code is covered by the tests. It’s an excellent way to spot edge-cases that you haven’t thought about.
It is quite revealing that Microsoft only include their coverage tool in the Enterprise, maximum-price editions of Visual Studio. For Microsoft, plainly, it’s a luxury. But to Java developers like myself, it’s something you use every day.
Of course I can’t afford the expensive corporate editions. But I think there is a relatively cheap tool that I could use. I will look.
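For what it’s worth, here is the sort of thing a coverage tool catches. A sketch in Python rather than VB.Net, purely because it is quicker to show here; the function and its single test are invented for illustration:

```python
# A toy parser with three branches. A single "happy path" test never
# touches the last branch -- which is exactly what a coverage tool
# (e.g. coverage.py, via "coverage run") makes visible at a glance.

def parse_case_ending(word):
    """Return a crude guess at the case of a Latin noun ending."""
    if word.endswith("am"):
        return "accusative singular"
    elif word.endswith("a"):
        return "nominative singular"
    else:
        # Edge case: nothing recognised. Easy to forget to test.
        return "unknown"

# The only test written at first -- it covers just the first branch:
assert parse_case_ending("discipulam") == "accusative singular"
```

Run this under a coverage tool and the `unknown` branch shows up as never executed: the visual cue that an edge case has no test yet.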
Once the code is working, then I can set about adding the syntactical stuff that caused me to undertake this in the first place! I have a small pile of grammars on the floor by my desk which have sat there for a fortnight!
I’m still thinking a bit about the ruins of the Roman fort which lies under the waves at Felixstowe in Suffolk. This evening I found that another article exists, estimating how far the coast extended and how big the fort was. It’s not online, but I think a nearby (25 miles away) university will have it. I’ve sent them a message on twitter, and we’ll see.*
I’ve also continued to monitor archaeological feeds on twitter for items of interest. I’m starting to build up quite a backlog of things to post about! I’ll get to them sometime.
* They did not respond.
- J. Hagar, “A new plan for Walton Castle Suffolk”, Archaeology Today vol 8.1 (1987), pp. 22-25. It seems to be a popular publication, once known as Minerva, but there’s little enough in the literature that it’s worth tracking down.
WordPress decided, without my permission, to install version 5.1, complete with their new but deeply unpopular “Gutenberg” editor that nobody wanted or requested. I can’t downgrade from 5.1, but I’ve managed to get rid of the useless Gutenberg editor. Let me know if there are any funnies.
In late antiquity the Saxons started to make raids into the Roman province of Britannia. This they did by sailing across the North Sea – the Narrow Seas, as it is also known – in open boats. In response to this the Romans built a chain of impressive forts along the British coast, under the command of a “Count of the Saxon Shore”. A couple of these still exist in significant form, at Pevensey where a Norman keep was added, and at Burgh Castle in Suffolk. Others have disappeared.
One of the vanished forts was located on the Suffolk coast somewhere, at “Walton”. There is more than one place of that name. There is Walton, which was a medieval village near a small fishing hamlet named Felixstowe. But at the start of the 20th century Felixstowe became a seaside resort and has since swallowed up the whole area. There is also Walton-on-the-Naze. At neither place is there anything to be seen. General publications on the area often cast doubt on whether there was any such fort.
At the weekend I came across an article published in 2000 on “Walton Castle”. It seems that, far from being unknown, the fort stood on the cliffs at Felixstowe, to the south of a valley called “The Dip”. There are 18th century sketches of the walls, then standing almost full-height. The coastline is sand, and is eroding, and so the whole structure progressively fell into the sea around 1800.
But there is more. Great masses of Roman brick, stone and concrete do not just disappear, just because they have fallen down thirty feet and been covered by water. They should still be there. The article sadly suggests that a lot of it was hauled up and used for hardcore at Harwich port around the same time.
However, it seems that in 1969 the local sub-aqua club went out and surveyed the area, and a report was filed with the local museum in Ipswich. The article refers to this as the “Errington manuscript”. No doubt this is typescript and hand-drawn diagrams.
Note the dark mass in the sea, right in the centre of the photograph. This, the site says, is a bit of wall poking above the waves.
Last night the low tide was at 16:50, so I went down there and had a look. There was nothing to be seen. Probably it requires an unusually low tide. But it was hard to see exactly from where the photograph was taken. I went again this morning and just scouted the area. But I need to go again.
Also this morning I contacted the Ipswich Museum, and I have written an enquiry to their collections team to see if I can get hold of a copy of the Errington manuscript. This ought to specify precisely where the ruins are, and what is to be seen.
It is fifty years since the original dives were made. I wonder if the local sub-aqua club (which still exists) might be interested in going out there again? I wonder how much there is there? What can be seen? If the ruins are that close to the surface, what might a drone be able to photograph on a still day?
All very interesting anyway. I’m still working on QuickLatin, but I will look into this further and write more!
This is another highly technical post, so I apologise to those readers with no interest in programming.
This week I have continued the ghastly process of migrating the 27,000 lines of code that make up QuickLatin from Visual Basic 6 to VB.Net 2008.
I found that the “Upgrade Wizard” for VB6 was no longer included in versions of Visual Studio later than 2008. So I don’t really get a choice on which version of dotNet to use. That said, I have found that it works quite well, so long as you approach the problem in the right way. You will, in fact, have to adopt this approach whatever tool you use.
The first thing is to place the existing code under source control. You will need to change this code a lot, to make it fit to convert. Sometimes you will get it wrong, and need to revert back to the last checked-in version. Believe me, you will! I checked in code after each small set of changes. It was the only way.
You see, the key to converting a VB6 application to VB.Net is to keep the application working at all times. Don’t simply launch the upgrade wizard and then end up with 50,000 errors in the VB.Net version, and then start at one end to fix them all. You will just give up!
Instead, make changes to the VB6 version of the code. Make small changes, check it works, check in the change. Then do another; and another.
We all know the sort of things that don’t get converted OK:
- Fixed strings in user-defined types (UDTs). Code these out: convert them to non-fixed strings, a little at a time. Chances are you created them in order to make Win32 API calls. Those won’t convert either, and you will have to recode that stuff by hand. So do a little recoding in VB6.
- API calls, i.e. stuff that you added in to do more hairy things. Recode to remove them. You may end up just commenting out the contents of the functions. I have code that does a splash screen; I don’t need that in VB.Net, which has a built-in SplashScreen template. The same goes for other things.
- Front-end forms stuff. No need for splitters – VB.NET has its own SplitContainer.
There are many more. Some are listed in a Microsoft document “Preparing Your Visual Basic 6.0 Applications for the Upgrade to Visual Basic.NET” (no doubt this link will fail in a couple of years, thanks to Microsoft’s idiotic policy of moving files around on their website). If you page down past the general chit-chat, at the bottom is a list of stuff that won’t convert. Fix these.
If you do this, you can eliminate most of the rubbish, while keeping most of your code still working.
Once you have done this, then do a test of the Upgrade Wizard. Expect to throw it away; but it will give you a list of failures to address in VB6.
Once I did this, I ended up with some 37 errors in VB.Net. That was a tiny number! Most of these I fixed in VB6, and reran the Upgrade Wizard several times. By the end my VB6 application was rather damaged, and much of the UI didn’t really work. But the logic engine was still running just fine.
A few things just won’t convert. But you can fix a few on the other side.
Once your VB.Net application compiles, you can try and run it. It will fail, of course. This bit is just slog. You find out what is wrong, and then consider if you can code it out in VB6 and re-upgrade. Often you can.
QuickLatin has a lot of file-handling. VB6 was slow in reading files, so I created a .DLL written in Visual C++, purely to grab a file and squirt it into an area of memory mapped to an array of UDTs. Needless to say this did not convert to dotNet! So what I did was write a slow version, in raw VB6, which did the same thing. I unpicked the optimisation, knowing that I could reoptimise on the other side of the upgrade process.
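The same trick – fixed-size records slurped straight out of a binary file in one go – looks something like this in Python. The record layout below is invented for illustration, not QuickLatin’s actual format:

```python
import os
import struct
import tempfile

# Hypothetical fixed-size record: a 4-byte integer id plus a 12-byte
# fixed-width ASCII field -- the moral equivalent of a VB6 UDT
# containing a fixed string.
RECORD = struct.Struct("<i12s")

def write_records(path, rows):
    with open(path, "wb") as f:
        for rec_id, text in rows:
            f.write(RECORD.pack(rec_id, text.encode("ascii")))

def read_records(path):
    # Grab the whole file in one gulp, then carve it into records:
    # one read call, instead of one call per record.
    with open(path, "rb") as f:
        blob = f.read()
    return [(rec_id, raw.rstrip(b"\0").decode("ascii"))
            for rec_id, raw in RECORD.iter_unpack(blob)]

path = os.path.join(tempfile.mkdtemp(), "dict.bin")
write_records(path, [(1, "amo"), (2, "amas")])
records = read_records(path)   # [(1, "amo"), (2, "amas")]
```

The whole-file read is the same optimisation idea as the old C++ DLL: avoid per-record I/O, and let the fixed record size do the slicing.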
I ended up recompiling the DLL. I’m not sure when I wrote that, but it was probably in Visual Studio 6. It produced a DLL of around 80 KB. The version of the same code, produced by Visual C++ 2008, was 107 KB. It did exactly the same; but Microsoft’s lazy compiler developers had bloated it by a third. Microsoft was always notorious for code bloat. I remember that when IBM took over the source code for OS/2, they were horrified at how flabby it all was, and rewrote most of it in assembler.
However I couldn’t get the DLL to work in VB.Net, whatever I did. So… I eliminated it and accepted the slower load, for now.
I’ve now reached the stage where the code runs, but it isn’t doing it right. This is not unexpected. The change from arrays based on 1 to arrays based on 0 was always likely to break something. But it’s an opportunity to make use of a VB.Net feature, and create unit tests! I have started to do this, not without pain.
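The flavour of bug involved is easy to show. A sketch in Python (itself 0-based, like VB.Net): a loop whose bounds were written out of 1-based habit quietly drops the first element, with no error raised at all.

```python
# VB6 arrays could be declared base 1; VB.Net arrays are base 0.
# A loop written as if indices ran 1..N silently skips element 0
# after conversion -- no crash, just subtly wrong answers.

stems = ["am", "mon", "reg", "aud"]   # four verb stems

def count_stems_base1_habit(items):
    total = 0
    for i in range(1, len(items)):    # misses index 0!
        total += 1
    return total

def count_stems_correct(items):
    total = 0
    for i in range(0, len(items)):
        total += 1
    return total

buggy = count_stems_base1_habit(stems)   # 3 -- one short
right = count_stems_correct(stems)       # 4
```

This is exactly the class of error that a set of automated tests catches at once, and manual spot-checks miss for weeks.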
Of course even VB.NET 2008 is now more than a decade old. At that time the idea of loose coupling and dependency injection was only just coming in. I gather that even today Visual Studio 2019 doesn’t really support this all that well. To a professional Java developer like myself, the idea that the DI equivalent of Spring isn’t even on the mental horizon of Microsoft staff is extraordinary. I’ll manage somehow; but why can’t I just annotate my classes in dotNet, as I do in Java? Why?
In the course of today, it has become clear to me that Microsoft’s developers never used VB6 themselves, nor do they use VB.Net. If they had, they would never have created this huge roadblock to upgrading VB6. There is still a huge amount of VB6 out there in corporations. But Microsoft’s staff couldn’t care less. Indeed there always was a lot of it about, which is one reason that I found it expedient to learn some, all those years ago. Job security consists of having the skills that people want, even if they don’t want to see them on your CV.
Had Microsoft’s developers ever used VB6 internally, they would have collared the VB.Net team and given them a straight talking-to.
Likewise anybody who does the upgrade immediately finds that he can’t do a lot without massive refactoring. Again, this shows that nobody at Microsoft actually ever went through this process. Because you don’t want to do a load of refactoring: you have no tests, and you might break stuff. Nor can you easily create tests for code that lives in Modules rather than classes.
Microsoft was founded by Bill Gates, who owed his start to writing a Basic Interpreter for some of the early microprocessors. So Basic was important to him. It seems that in later years it wasn’t important to anyone else. This is a shame.
Microsoft was very arrogant in the 90s and early 2000s. Few were sorry when Google knocked them off their perch.
Oh well. Onwards. Thank heavens I have lots of time right now, tho!
I’ve been continuing to work on QuickLatin. The conversion from VB6 to VB.Net is horrible, but I am making real progress.
The key to it is to change the VB6 project, so that it will convert better. So for instance I have various places at which I make a raw Win32 API call, because VB6 just doesn’t do something. These must mostly go. I replace them with slower equivalents using mainstream VB6 features. In some cases I shall simply have to rewrite the functionality; but this is mainly front-end stuff.
All the same, the key point is to ensure that the VB6 project continues to work. It is essential not to allow this to fail, or develop bugs. This is one area where automated unit tests would be invaluable; but of course that concept did not arise until VB6 was long dead. So I have to run the program manually and do a few simple tests. This has worked, as far as I can tell.
The objective is to have a VB6 project that converts cleanly, and works out of the box. It may be slower, it may have reduced functionality in peripheral areas. But the business logic remains intact – all those hard-crafted thousands of lines of code still work.
It’s going fairly well. I’ve been working through known problems – arrays that need to be base 0 rather than base 1. Fixed strings inside user defined types have to go. There is a list on the Microsoft site of the likely problems.
Today I had my first attempt at running the VB.Net 2008 Upgrade Wizard. It failed, as I expected it to do. The purpose was to identify areas in VB6 that needed work. But the converted code only had 37 errors. Only 3 of these were in the business logic, rather than the front-end, and all were easily fixed in VB6. There were also a large number of warnings, nearly all of them about uninitialised structures. Those can wait.
So my next stage is to do something about the 34 front-end errors. Probably I shall simply have to comment out functionality. Splitters are done differently in VB.NET. The CommonDialog of VB6 no longer exists to handle file opening. That’s OK… I can cope with rewriting those.
It has reminded me how much I like programming tho.
In the middle of this enormous task, of course, there is no lack of people who decide to email me about some concern of their own. So … polite refusals to be distracted are now necessary. I hate writing those. But a big project like this can’t get done any other way.
It’s been an interesting couple of days.
I was working on the Passio of St Valentine, and I really felt that I could do with some help. So I started browsing grammars.
This caused me to realise that many of the “rules” embedded in them were things that you’d like to have pop-up, sort of as an informational message, when you were looking at the sentence in a translation tool.
This in turn reminded me that my own morphologising tool, QuickLatin, was available and a natural candidate for such a thing.
This is written in Visual Basic 6. I wrote most of it, actually, in Visual Basic for Applications, inside an MS Access database, during 1999. (The language choice was dictated by the machine that I had available at the time, which had no development tools on it). I then ported it to Visual Basic 6. Microsoft then kindly abandoned VB6, without even a migration path, some time in the early 2000s. This left me, and many others, stuck. It is not a trivial task to rewrite 24,000 lines of code.
So where was my development environment? I pulled out the last four laptops that I have used; I have them, because I keep all my old machines. I found it on my Windows XP machine. The machine started up OK! In fact the batteries on the Dell laptops all started to charge, unlike a Sony Vaio which had Windows 7 on it.
The Windows XP machine had a tiny screen and was very old. Could I perhaps install VB6 on Windows 10 instead? The answer swiftly proved to be a resounding “no”. But I gathered a large number of tips from the web while doing so.
Then I tried installing VB onto my travelling laptop, which has Windows 7 on it, using all the info that I had. The installation failed; but the software seemed to be installed anyway!
Then I tried doing it again on Windows 10. This time I had a sneaky extra bit of information – to set the SETUP.EXE to run in Windows XP compatibility mode. And … again it failed; but as with Windows 7, I could in fact still run it!
The process was so fraught that I knew that I’d never remember all the fixes and tips. So I compiled all the bits together, hastily, into a reference guide on How to Install Visual Basic 6 on Windows 10, for my own use in days to come.
After two days of constant pain, I was at last in a position to work on the code!
But I wasn’t done yet. I really would rather not work with VB6 any more. Not that I dislike it; but it is emphatically a dead toolset. My attempts to convert my code to VB.Net all failed.
But since I last looked, more tools have become available. My eye was drawn to a commercial product, which Microsoft themselves recommended, by a firm called Mobilize.net. The tool was VBUC. You could get a free version which would convert 10,000 lines. Surely, I naively thought, that would be enough for me?
Anyway I downloaded VBUC, and ran it, and discovered to my horror that I had nearly 30,000 lines of code! But I set up a tiny test project, with half-a-dozen files borrowed from my main source project, and converted that. The process of extracting a few files drew my attention to what spaghetti the codebase has become. It was not trivial to just take a few. This in turn made me alter the extracted VB code a bit, so that I could use it.
Converting the extract worked, but required some manual fixing. However it did work in the end.
I was quite impressed with some of the conversions. One of the StackOverflow pages had indicated that the firm were charging a couple of hundred dollars for the tool, back in 2010. So I emailed to ask what they were charging now.
Mobilize.net then got a bit funny on me. Instead of telling me, they asked me to tell them what I wanted it for. I replied, briefly. Then they wanted me to run an analyser tool on my code and send it in. I did. Then they wanted more details of what it did. Quite a few emails to and fro.
By this stage I was getting fed up, and I pushed a bit. They finally came back with a price, based on lines of code, of around $4,500! That was ridiculous, and our exchange naturally went no further.
However I had not wasted my time, for the most part. I could now see what the tool might do. My code may be elderly, but some of the bits that were converted are basically the same throughout. It is quite possible that I could write my own tool to do the limited subset of changes that I need.
One problem was not handled well: QuickLatin loads its dictionaries as binaries, created by another tool of my own. I found that VB.Net would not handle these, whatever I did. The dictionaries would need to be regenerated in some other format.
So I spent some time experimenting with an XML format. I quickly found how slow the VB6 file i/o was. Reading a 20 MB file using VB native methods took 4 seconds. Using MSXML to load the file and parse it into a linked list took 1.7 seconds! I didn’t want the linked-list method; but it was clear that the VB native methods were hideously inefficient.
I soon discovered complaints online that the VB.Net i/o did not support the methods used by VB6 and was even slower! I’ve encountered problems of this sort before, which I got around by dropping into C++ and accessing the files through bare metal. Clearly I would have to do so again.
Another problem that VBUC showed me was that VB6 fixed length strings were not really supported by VB.Net. There was some sort of path, but it was horrible. However there was, in fact, no reason to go that way; the file i/o, for which they were used, will have to change anyway.
I placed my code base under code control, using GIT. Then I started cautiously making changes, checking that “amas” was giving sensible results – for unit tests were unknown in the days of VB6 – and committing regularly. This proved wise; several times I had to go back to the last commit.
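That manual “amas” check is, in essence, a regression test. The shape of it can be sketched in Python; the lookup here is a toy stand-in with a single hard-coded entry, not QuickLatin’s real analysis:

```python
# A toy morphological lookup standing in for the real engine.
# One entry is enough to show the shape of a regression test:
# run the analysis, assert the parse has not silently changed.

TOY_TABLE = {
    "amas": ("amo", "2nd person singular, present active indicative"),
}

def analyse(word):
    """Return (dictionary headword, parse) or None if unknown."""
    return TOY_TABLE.get(word)

# The check I was doing by hand after every commit:
headword, parse = analyse("amas")
assert headword == "amo"
assert "2nd person singular" in parse
```

Wrap a few dozen of these in a test runner and every commit gets the same scrutiny automatically, which is the whole point of moving to a toolset where unit tests exist.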
I spent quite a bit of time removing superfluous fixed strings from the code. This was not trivial, but I made headway.
Something else I did, once I realised that coding lay ahead, was to rig up an external monitor, keyboard and mouse to my laptop. I would have rigged up two, but there was no way to turn off the laptop screen – when you close the lid, the machine goes to sleep and that’s that. On a commercial laptop, I’d set it to turn off the laptop screen and stay running. Most graphics cards will support two monitors; the home laptops won’t support three. Oh well. But it was still better for serious work than using the laptop screen and keyboard alone.
Finally I started creating dictionary loading routines that would convert to VB.NET. They are much slower; but I can optimise them when I get the code into VB.NET. They have to change, come what may. The key thing is to keep the program running and working at all times. Take it slow, little by little. If I take it apart into a million pieces, it will never get back together again. Indeed I have made this mistake before.
Back in the 90s, automated unit tests, continuous integration, test-driven development and dependency injection were all unheard of. I have really missed having a set of tests that I can run to check that the code has not broken in some subtle way. This again is a reason to migrate to VB.Net, where such is possible. I did write test stubs in the original VBA, but there was no way to run them within VB6. At least I have them still, and they can form the basis for unit tests.
So … it’s been a very busy few days indeed. Nothing to show for it, to many eyes; but I feel optimistic.
The next challenges will be to change the other dictionaries over to the slow-but-safe method, and then remove all the stuff that supported the other approach. This should simplify the code mightily. Once this is done, then it will be time to attempt to convert the code. Somehow. All I need is time, and with luck I shall have some of that this week.
It is remarkable how far down the rabbit-hole one must go, just to get a bit of online help!
When I was 11 years old, I was transferred to an old-fashioned northern grammar school. This kept up the tradition of Latin and Greek, and Latin began at 11, and continued until 16.
The textbook used was Paterson and Macnaughton, The Approach to Latin. This was actually the first volume of a three book series. It continued in The Approach to Latin: Second Part, and there was also The Approach to Latin Writing.
I remember the Latin classes well. It was a devil to learn, but useful to know, and I was fortunate to have an excellent teacher.
Schoolboys are hard on books. At some time in my later years there, the school disposed of unwanted copies of the last two volumes, to anyone who would take them away. I picked up the latter two, and they lived in an old briefcase in my loft for many years until I brought them down a decade ago and shelved them with my Latin books.
There was no opportunity to collect a copy of the first volume – the only one I ever used. But I was able to purchase a reprint in Cambridge in the late 1990s, from Heffers Bookshop, for £9.90.
These little books must have been printed in tens of thousands, but how many remain today? Searching for copies online on Amazon and Abebooks, I was shocked to find that all these books are now sold for princely prices. One copy was offered at over $1500 today, which is ridiculous.
I looked inside them last night, and took Second Part to bed with me, to read through the slender sections on Latin syntactical constructions.
The introduction states that they contain “a selected minimum of grammar”, and this is very true. The vast bulk of the book consists of exercises. Each bit of grammar, often half a page, is followed by two or three pages of exercises. This is, of course, frustrating if you want hard information.
The actual grammatical content is very concise. This morning I was reading a chunk of the Writing book, explaining something, and I wondered if there was a misprint! I had to reread it two or three times before I could work out what they meant. That the book was meant to be taught by a human teacher was evident.
I do not believe that my schoolboy self could have decrypted that sentence. In this sense, it is not a good teaching book. I know that my old teacher preferred Coles, Latin Grammar Simplified.
The exercises from The Approach to Latin I remember well. They started easy and became harder and harder. I loathed translating English into Latin. But in fact, it is clear that the authors’ intention was to prepare boys to translate from English into Latin, doubtless for examination purposes. This appears clearly from the grammatical material, now I look at it.
The limited syntactical material is not well presented. But it caused me to think that I could extend the functionality of QuickLatin, my morphologiser, to flash up some possibilities whenever you see the word “ut” or “sicut”, or find a gerundive. It wouldn’t be hard to do that. At least, it shouldn’t be.
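A sketch of the idea, in Python for brevity; the grammar notes below are drastically abbreviated, and the function name is invented:

```python
# Map trigger words to the syntactical possibilities worth flashing up.
# The notes are heavily simplified for illustration.
GRAMMAR_NOTES = {
    "ut": [
        "with subjunctive: purpose clause ('in order that')",
        "with subjunctive: result clause ('so that')",
        "with indicative: 'as', 'when'",
    ],
    "sicut": [
        "'just as', introducing a comparison",
    ],
}

def notes_for(word):
    """Return any pop-up notes for a trigger word (case-insensitive)."""
    return GRAMMAR_NOTES.get(word.lower(), [])

hints = notes_for("Ut")   # three possibilities to flash up
```

A plain dictionary lookup keyed on the trigger word is really all the pop-up needs; the hard work is writing good notes, not the code.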
QuickLatin was originally written in 1999, during a long and pointless government contract. It was written in Visual Basic 6, that much missed and easy to use tool that Microsoft sold at that time. From time to time I have tried to rewrite it in something newer; but always I have had to stop, and go and earn a living.
Microsoft have never troubled to maintain their own tools, or to guarantee backward compatibility. VB6 was phased out in favour of the incompatible VB.Net, almost twenty years ago. VB.Net is now being phased out, with nothing to replace it. I did attempt a rewrite in Microsoft Visual C/C++, and got so far. When I came to pick up the project, a couple of years later, it wouldn’t even compile in the newer version of the tool.
Of course 1999 was aeons ago, by modern software practices. Today everything is done with Test-Driven Development (although Microsoft never liked it, and their support for it in .NET was always rubbish). Everything is divided into classes and objects. VB6 stuff predates that. It was always hard to work with.
Anyway, I’m stuck with VB6 code. You can’t even convert it to VB.NET, incredibly. Thus I’ve not put out a new version of QuickLatin in years. But VB6 continued to work on Windows 7, as I recall. I still have the disks for VB6.
It is apparently possible to install this on Windows 10. But … for me at least … it has not worked.
This is why computers are frustrating. You decide to do something and then, little by little, get led softly away from that into a Byzantine series of other tasks.
The task I want to do is translate some Latin. To do this, I need to know more syntax. To help with this, I’d like to modify QuickLatin. To do that, I need to install VB6. To do that… I need to find workarounds. Maybe install a virtual XP machine. Which means… yet more stuff to do. All of which take time and prevent me doing what I actually wanted to do!
And so another day disappears.
In the last few days I have been looking at the Latin text of the passio of St Valentine of Interamna / Terni. It’s a while since I did any Latin translating. But the process always involves difficulty.
These days it is very easy to determine the tense, number, case, gender and meaning of individual words, with tools like QuickLatin or Whitaker’s Words, or other morphologisers.
Likewise the wide availability of dictionaries in PDF format makes it easier than ever to look up unusual words in specialised dictionaries. The next step is for these to emerge in an indexed electronic form. In fact a kind correspondent has sent me an interesting tool, in which you can search a wide range of dictionaries, where the start and end word of each page are stored electronically, and then you can display on-screen a bitmap of the page. Many problems require a flash of genius; and this is an example of it. I hope to write more about this in due course.
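The clever part is how little metadata an alphabetical dictionary needs: in principle the first headword of each scanned page is enough to route a query to the right bitmap. A sketch in Python, with an invented three-page “dictionary” (the real tool reportedly stores first and last word per page):

```python
import bisect

# For each scanned page, store its first headword, in alphabetical order.
PAGE_FIRST_WORDS = ["abacus", "gladius", "pax"]   # pages 1, 2, 3

def page_for(word):
    """Return the 1-based page number whose range contains `word`."""
    # bisect_right counts how many pages start at or before the word;
    # that count is exactly the page to display.
    i = bisect.bisect_right(PAGE_FIRST_WORDS, word)
    return max(i, 1)   # anything before the first headword: page 1

page = page_for("miles")   # after "gladius", before "pax" -> page 2
```

A binary search over a few hundred stored words locates the page instantly, and the page itself is then shown as an image: no need to OCR or transcribe the whole dictionary.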
But none of this helps you with a phrase which simply won’t make sense, even when you know all the words. This is because you don’t know much about Latin syntax.
A lot of people know some Latin. A great number of people know enough Latin to do something with the tools above.
But most people do not know simple Latin constructions, like the accusative + infinitive phrase, even though there is a Wikipedia page on it:
Iulia dicit, se bonam discipulam esse
Julia says, that she (se) is (esse) a good pupil (bonam discipulam)
Once you recognise the format, it’s not hard. You translate the accusative “se” as “that she”, and “she” becomes the subject of the new clause. You find the infinitive (“esse”, to be), and treat it as an indicative (“est”, is). After that, the rest of the clause is normal. There may be other words along for the ride, which don’t matter, as here with “bonam discipulam”.
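Spotting the pattern mechanically is not far-fetched either. A sketch in Python, with hand-tagged tokens standing in for the tags a morphologiser would supply:

```python
# Each token carries the features a morphologiser could supply.
# The tags here are hand-written for the example sentence.
sentence = [
    ("Iulia",      {"case": "nominative"}),
    ("dicit",      {"mood": "indicative"}),
    ("se",         {"case": "accusative"}),
    ("bonam",      {"case": "accusative"}),
    ("discipulam", {"case": "accusative"}),
    ("esse",       {"mood": "infinitive"}),
]

def has_accusative_infinitive(tokens):
    """Flag an accusative followed later by an infinitive -- a rough
    first-pass trigger for an accusative + infinitive pop-up."""
    seen_accusative = False
    for _, tags in tokens:
        if tags.get("case") == "accusative":
            seen_accusative = True
        elif tags.get("mood") == "infinitive" and seen_accusative:
            return True
    return False

flagged = has_accusative_infinitive(sentence)   # True
```

A real rule would need to be much fussier – accusatives turn up everywhere – but as a trigger for an informational pop-up, rather than a parser, over-flagging is acceptable.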
I have found by experience that few people understand this construction.
My own knowledge of Latin constructions is limited. It wasn’t an important part of the Latin that I did at school.
Something that we all need to work upon some more. I shall dig out a textbook and have a read!