Edit: For those of you looking for an answer to the question as stated: the standard limits the number of nested loops at compile time. At runtime this is a different issue, as the only limit would be the size of the program segment.
Solved: I was looking too early in the build process. The C file gets further preprocessing applied to it. Off to the subsequent steps.
I have a problem with C code generated via Perl from a language that applies rules for generating pronunciation. In essence, the input is a huge dictionary of exceptions to pronunciation rules. The code is riddled with gotos, and it worked until one of the exception dictionaries reached 23K rules.
The code is basically unreadable, but I have managed to make the C code compile after removing what appears to be the 6200th nested loop:
for (dictionary1 = seed1; dictionary1 < limit1; dictionary1++)
{
    for (dictionary2 = seed2; dictionary2 < limit2; dictionary2++)
    {
        /* .... */
        for (dictionary6199 = seed6199; dictionary6199 < limit6199; dictionary6199++)
        {
            /* two hundred more removed; adding one makes it not compile */
        }
    }
}
Both gcc and xlC are able to handle these, but aCC 3.73 (on HP-UX 11.23, PA-RISC) is puking:
Compiling /home/ojblass/exception_dictionary_a.c...
Loading the kernel...
Pid 18324 killed due to text modification or page I/O error
/bin/ksh: 28004 Bus error(coredump)
*** Error exit code 138
I have found this link and tried many of the suggested fixes without success.
For legacy reasons I have to compile for 32-bit (it uses a 32-bit library for which I have no 64-bit counterpart).
maxdsiz = 256 MB (0x10000000), tried up to 4 GB
maxssiz = 16 MB (0x1000000), tried up to 100 MB
maxtsiz = 256 MB (0x10000000), tried up to 1 GB
Any suggestions on compiler settings, or a good link to documentation for aCC 3.73? I am drowning in search results.
I have coded a workaround that breaks the dictionary into two parts, resulting in dictionary_an.c and dictionary_az.c. I have had to touch some core logic I don't feel comfortable touching in order to pull this off, and I was hoping to back down to the original configuration.
Wow - I know it's no help to you, but nesting 6199 levels deep is well in excess of the minimum nesting depth the standards require implementations to support (15 for C90, 127 for C99, and 256 for C++).
What I'm curious about is how well that thing runs - if your dictionaries are of any size, the number of loop iterations must be astronomical. Say each dictionary size is 10: 10 ^ 6199 is quite a large number. Even if there are only 2 items per dictionary, 2 ^ 6199 is impressive as well.