Schome Park Dictionary/Viewer
About the SPD viewer script
This script is the viewer for the Schome Park Dictionary (SPD). It generates several LSL scripts, each of which you simply paste into a separate object in SL (the reasons are explained below), and it can handle any number of entries as long as you can make enough objects. You will notice several edits in the SPD page's history which say things along the lines of 'changing format so compiler can understand it'. These edits are just rearrangements of the layout so that the page is set out in a way this script understands.
The compiler script
The compiler itself is written in Python. To run it on your computer, you will need Python installed, and you will need to copy the entries from the SPD into a file called 'spd.txt' in the same directory as the script (you can copy the whole page, starting from the first entry; lines that are not definitions are skipped automatically).
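For reference, the parser below only keeps lines of the form "terms - (type) definition", where forward slashes separate alternative spellings of a term, and an entry written as 'a, b' is rearranged to 'b a'. A hypothetical spd.txt fragment (these entries are made up for illustration, not taken from the real SPD):

```
AAR/After Action Review - (noun) A review meeting held after an event
Schomer, a/Schomie, a - (noun) A member of the Schome community
```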
```python
# Open the input file
inf = open("spd.txt", "r")

# Term, type and definition lists
l1, l2, l3 = [], [], []

# Loop through the file
for line in inf:
    # Check that the line has content on it
    if not line.strip():
        continue
    # Find a dash, indicating the end of the terms
    e1 = line.find(' - ')
    # Check that this line is a definition line
    if e1 == -1:
        continue
    # Find the start of the type (eg, noun, verb, etc.)
    s2 = line.find('(', e1)
    # More checking to ensure we only parse definitions
    if s2 == -1:
        continue
    # Find the *end* of the type
    e2 = line.find(')', s2)
    # Final piece of checking for definition lines
    if e2 == -1:
        continue
    # List of each alternative
    alts = []
    # Pick out the section that contains the terms
    s = line[:e1].strip()
    # Extract each alternative
    i = 0
    # The initial 'last index' position is -1
    # because it is increased by 1 before use
    il = -1
    while i != -1:
        # Forward slashes separate alternatives
        i = s.find("/", i+1)
        # Ensure that all letters are included
        if i == -1:
            idx = e1
        else:
            idx = i
        # Extract the portion containing this alternative
        st = s[il+1:idx]
        # If this alternative is in the form of 'a, b',
        # rearrange it to say 'b a'
        idx = st.find(", ")
        if idx != -1:
            st = st[idx+2:] + " " + st[:idx]
        # Add this alternative to the list
        alts.append(st)
        # Set the 'last index' position to the current index
        il = i
    # Append each term to the lists
    for itm in alts:
        l1.append(itm.lower())
        l2.append(line[s2+1:e2].strip())
        l3.append(line[e2+1:].strip())

# Close the input file
inf.close()

# Due to the 16kb memory limit on LSL scripts
# and the 72-item compile-time list length limit,
# place each 72-entry block in a separate file
idx = 0
while 1:
    # Get the number of entries left, up to 72
    n = min(72, len(l1))
    # If no entries are left, exit the loop
    if not n:
        break
    # Open the next output file
    outf = open("spd_compiled%d.txt" % (idx+1), "w")
    # Output a header
    outf.write("integer INTERCOM_CHANNEL = -10;\n\n")
    # Construct the list of terms
    outf.write("list termlist = [")
    for i in range(n):
        # If this is not the last entry, simply add the entry.
        if i != n-1:
            outf.write('"%s",\n' % l1[i].replace('"', '\\"'))
        # If it *is* the last entry, add the entry and close the list
        else:
            outf.write('"%s"];\n' % l1[i].replace('"', '\\"'))
    # Construct the list of types
    outf.write("\nlist typlist = [")
    for i in range(n):
        # If this is not the last entry, simply add the entry.
        if i != n-1:
            outf.write('"%s",\n' % l2[i].replace('"', '\\"'))
        # If it *is* the last entry, add the entry and close the list
        else:
            outf.write('"%s"];\n' % l2[i].replace('"', '\\"'))
    # Construct the list of definitions
    outf.write("\nlist deflist = [")
    for i in range(n):
        # If this is not the last entry, simply add the entry.
        if i != n-1:
            outf.write('"%s",\n' % l3[i].replace('"', '\\"'))
        # If it *is* the last entry, add the entry and close the list
        else:
            outf.write('"%s"];\n\n' % l3[i].replace('"', '\\"'))
    # Output the portion of the script which takes and handles requests
    outf.write("default\n{\n    state_entry()\n    {\n"
               "        llListen(INTERCOM_CHANNEL, \"SPD co-ordinator\", "
               "\"\", \"\");\n    }\n\n"
               "    listen(integer channel, string name, key id, string msg)\n"
               "    {\n        string m = \"\";\n        integer n;\n"
               "        n = llListFindList(termlist, [llToLower(msg)]);\n"
               "        if (n == -1)\n"
               "            m = \"!\";\n"
               "        else\n"
               "            m = msg + \" - (\" + llList2String(typlist, n) + "
               "\") \" + llList2String(deflist, n);\n"
               "        llWhisper(INTERCOM_CHANNEL, m);\n    }\n}")
    # Increment the 'file index'
    idx += 1
    # Remove all the items we have just put into lists
    l1, l2, l3 = l1[n:], l2[n:], l3[n:]
    # Close the output file
    outf.close()

# Write a file which takes the requests,
# routes them off to each of the other scripts,
# listens for the replies from them and tells the user
# what the definition is (or if there isn't one)
out = open("spd_compiled0.txt", "w")
out.write("""integer n = 0;
integer num = %d;
integer flag = 0;
string lookup = "";

integer IN_CHANNEL = 10;
integer INTERCOM_CHANNEL = -10;

default
{
    state_entry()
    {
        llListen(IN_CHANNEL, "", "", "");""" % idx)
for i in range(1, idx+1):
    out.write("\n        llListen(INTERCOM_CHANNEL, \"SPD part %d\", "
              "\"\", \"\");" % i)
out.write("""
    }

    listen(integer channel, string name, key id, string msg)
    {
        if (channel == IN_CHANNEL)
        {
            if (! flag)
            {
                llWhisper(INTERCOM_CHANNEL, msg);
                lookup = msg;
                flag = 1;
                n = 0;
            }
            else
                llSay(0, "Currently parsing another request; please wait");
        }
        else
        {
            if (msg != "!")
            {
                llSay(0, msg);
                flag = 0;
            }
            else
            {
                n++;
                if (n == num)
                {
                    llSay(0, "The term '" + lookup + "' is not in the SPD!");
                    flag = 0;
                }
            }
        }
    }
}""")
# Close the co-ordinator output file
out.close()
```
How to use
To use the output scripts, you need to copy each script (from spd_compiledn.txt) into a separate object. This is necessary because each object can only have one listening script, and every script here listens. The only implemented command is:
"/10 x" - look up x in the dictionary (case-insensitive). It will either whisper the definition or "The term ' x ' is not in the SPD".
Internal workings
The generator script works by reading and splitting all the definition lines in the input, then outputting several scripts with the definitions inserted into a simple template.
The script output to spd_compiled0.txt is the co-ordinator script. Simply put, it takes the requests and sends them out to the other scripts (on channel -10), which then return their replies to it on the same channel. This is necessary because otherwise you would get several replies per request.
The other outputted scripts wait for the co-ordinator to ask them for a reply, then search their lists of terms. If they don't find the term, they say '!' on channel -10; if they do, they fetch the whole definition and say that on the same channel. The co-ordinator counts the replies to check whether every part has responded with '!' or one has replied with something else: in the latter case it simply echoes the definition to the normal chat channel, and in the former it tells you that the term is not in the SPD.
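The co-ordinator's reply-counting behaviour can be modelled in a few lines of plain Python. This is a simplified sketch of the LSL state machine, not the actual script (the real one also whispers a fuller "The term '…' is not in the SPD!" message and manages a busy flag):

```python
def coordinate(replies, num):
    """Model of the co-ordinator's listen handler.

    replies: the messages said on channel -10 by the part scripts, in order.
    num:     the number of part scripts in total.
    Returns the chat output, or None if it is still waiting for replies.
    """
    n = 0
    for msg in replies:
        if msg != "!":
            return msg           # one part found the term: echo its definition
        n += 1
        if n == num:             # every part said '!': the term is unknown
            return "not in the SPD"
    return None                  # not all parts have replied yet

# Three parts, the second one knows the (made-up) term:
print(coordinate(["!", "grok - (verb) to understand", "!"], 3))
# All three parts say '!':
print(coordinate(["!", "!", "!"], 3))
```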
A specific naming scheme is also required: spd_compiled0.txt must be placed in an object called 'SPD co-ordinator' (case-sensitive), and the others must be placed in objects called 'SPD part n', where n is the number at the end of the filename (so spd_compiled1.txt goes in 'SPD part 1', and so on), also case-sensitive.