There's always time to play

Thursday, September 30, 2010

Updating Ubuntu without removing grub-efi

Ubuntu still tries to remove grub-efi every time a new kernel arrives. I have a Mac Mini without a display, so grub-pc is useless to me. How do I prevent this grub-efi removal every time?

The solution is simple: just tell apt you also want grub-efi installed, regardless of the availability of a new version:
$ sudo apt-get install linux-generic-pae grub-efi
Reading package lists... Done
Building dependency tree
Reading state information... Done
grub-efi is already the newest version.
The following extra packages will be installed:
linux-image-2.6.35-22-generic-pae linux-image-generic-pae
Suggested packages:
fdutils linux-doc-2.6.35 linux-source-2.6.35 linux-tools
Recommended packages:
grub-pc grub lilo
The following NEW packages will be installed:
The following packages will be upgraded:
linux-generic-pae linux-image-generic-pae
2 upgraded, 1 newly installed, 0 to remove and 3 not upgraded.
Need to get 34.1MB of archives.
After this operation, 107MB of additional disk space will be used.
Do you want to continue [Y/n]?

And voila, grub-pc is merely a recommended package and no longer forced upon me! Thanks go out to Frank Groeneveld for suggesting the solution!

Installing Samba 4 on Ubuntu Maverick (10.10)

Samba 4 is currently able to serve as an Active Directory domain controller for both Windows XP and Windows 7 (as tested by me), and probably for other Windows versions too. With Ubuntu 10.10 there finally is a recent enough version to make use of all the current Samba 4 functionality; however, some issues still remain. This post will provide a short guide to setting up Samba 4 on your Ubuntu Maverick system, but it won't go into more advanced Samba topics. At first I wanted this to be a full step-by-step guide, but I can't find the time to complete it as such (I started writing when Maverick was still in beta). I welcome comments adding more details and I hope everyone will be able to follow this howto.

Let's start by updating the system.
$ sudo apt-get update

Next, add a PPA which includes a more recent Bind 9 version. I believe this is mainly needed so your Windows clients can send DNS updates to the domain controller, but I can't say I thoroughly tested the version distributed with Ubuntu Maverick.

Personally I used bind9 from Hauke Lampe's PPA (BIND 9 Updates : Hauke Lampe).

Install samba4 and bind9:
$ sudo apt-get install samba4 samba4-clients bind9

Move existing smb.conf:
$ sudo mv /etc/samba/smb.conf{,.old}

Create a samba 4 config and provision the database:
$ sudo LD_PRELOAD=/usr/lib/ /usr/share/samba/setup/provision --domain=SAMDOM --adminpass=SOMEPASSWORD --server-role='domain controller'

You might be wondering what this LD_PRELOAD is about: it's needed because some of the binaries aren't linked against the dcerpc library.

Now we want to start Samba, but there's another issue ahead. The samba4 init script doesn't check for the existence of the samba directory in /var/run, so let's add that check ourselves.
# /etc/init.d/samba4
log_daemon_msg "Starting Samba 4 daemon" "samba"

if [ ! -d "$(dirname "$SAMBAPID")" ]; then
    mkdir -p "$(dirname "$SAMBAPID")"
fi

if !...

We're still not there yet... Remember the missing library link? It comes back while running Samba, so let's work around it by creating local versions of the samba programs that preload the library:

Create /usr/local/sbin/samba (and make it executable with chmod +x):
#!/bin/sh
LD_PRELOAD=/usr/lib/ exec /usr/sbin/$(basename $0) "$@"

Now symlink samba_dnsupdate and samba_spnupdate to the same file:
$ sudo ln -s /usr/local/sbin/samba{,_dnsupdate}
$ sudo ln -s /usr/local/sbin/samba{,_spnupdate}
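As an aside, the {,_dnsupdate} syntax is just shell brace expansion: the shell writes out both names before running ln, so the first command is equivalent to ln -s /usr/local/sbin/samba /usr/local/sbin/samba_dnsupdate. A throwaway demonstration (the paths below are temporary and made up, not part of the setup):

```shell
# Requires a brace-expanding shell such as bash.
tmp=$(mktemp -d)
touch "$tmp/samba"
# Expands to: ln -s $tmp/samba $tmp/samba_dnsupdate
ln -s "$tmp"/samba{,_dnsupdate}
readlink "$tmp/samba_dnsupdate"   # prints the path of the original file
rm -r "$tmp"
```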

Now start samba:
$ sudo /etc/init.d/samba4 start

Let's do a quick test to see if it's working:
$ smbclient -UAdministrator -Llocalhost
Password for [SAMDOM\Administrator]:

Sharename Type Comment
--------- ---- -------
netlogon Disk
sysvol Disk
IPC$ IPC IPC Service (Samba 4.0.0alpha12-GIT-UNKNOWN)
ADMIN$ Disk DISK Service (Samba 4.0.0alpha12-GIT-UNKNOWN)
REWRITE: list servers not implemented

Seems to be working!

Now let's get DNS working too. Start by editing named.conf.local:
// /etc/bind/named.conf.local
//include "/etc/bind/zones.rfc1918";

include "/var/lib/samba/private/named.conf";

Thought we were done? Think again! AppArmor is protecting our samba4 files from bind, while I'd rather have bind read them...
# /etc/apparmor.d/usr.sbin.named
/var/lib/samba/private/* rw,
/var/lib/samba/private/dns/* rw,


Reload AppArmor profiles and restart bind:
$ sudo /etc/init.d/apparmor reload
$ sudo /etc/init.d/bind9 restart

Bind should now start without any issues. The next step is to actually use bind for DNS:
# /etc/resolv.conf

You can verify it's working by querying dns for kerberos:
$ host -t SRV

This should return an SRV record, if not, something's broken!

Now let's move the Kerberos config into place:
$ sudo cp /var/lib/samba/private/krb5.conf /etc/

You can verify it's working by installing krb5-user and doing a kinit Administrator, but since Kerberos comes out of the box with samba, I'm assuming it's working (it always did for me).

If you chose to add a PPA with a recent Bind version, you can enable Kerberized DNS updates by pointing named to the correct principal and keytab. More details on this can be found on the Samba 4 howto, I will add my own details here later.

You should now be able to administer your Samba 4 domain controller using the Microsoft utilities for Windows server management, the Samba net tool, or direct LDAP queries.


Update, December 8, 2010, 22:56: Added missing AppArmor policy changes.

Monday, September 6, 2010

Rsync and remote sudo

Running rsync with superuser privileges on the remote side can be hard at times, but here's an easy solution that works on Ubuntu 10.04 (some other solutions failed to work):
$  echo "password" | ssh sudo -S -v
$ sudo rsync -a -e ssh --rsync-path="sudo rsync"

The first line touches the sudo timestamp, the second line does the actual sync. Keep in mind that this doesn't take care of credentials for ssh, so you will need to handle those using keys, agents or an external authentication mechanism like Kerberos.
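For reference, the same idea can be wrapped in a small script. This is only a sketch: the host and paths below are made-up placeholders, and it assumes remote sudo can validate without a piped password (keys plus NOPASSWD, or a still-valid timestamp).

```shell
#!/bin/sh
# Sketch of the remote-sudo rsync above; HOST, SRC and DEST are
# placeholders, not values from the post -- fill in your own.
HOST=backuphost.example.com
SRC=/etc/
DEST=/backup/etc/

# Refresh the sudo timestamp on the remote side first...
ssh "$HOST" sudo -v

# ...then sync while the remote timestamp is still valid.
rsync -a -e ssh --rsync-path="sudo rsync" "$SRC" "$HOST:$DEST"
```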

Thursday, August 26, 2010

rsync with --delete-excluded

While setting up daily (offsite) automated backups I ran into a few issues. First of all, backups didn't complete before people got to work again, so I had to stop them manually and restart them at a lower transfer rate. This is easily done by passing rsync the --bwlimit=<kbps> option.

Next, I often want to sync just part of the tree, so I would add --exclude=/<folder> to the options to exclude the folders I don't want. However, I also exclude some files and I use --delete, which has the nasty side effect of not deleting the excluded files on the receiving end (if they were deleted on the sender), thus leaving non-empty folders on the receiver and generating errors because non-empty folders can't be deleted. There's an option that 'fixes' this: --delete-excluded, which deletes excluded files on the receiving end. You can guess that combined with my --exclude=/<folder> this would delete an entire branch of the tree that should not be removed...

The solution is to mark the exclude as a receiving-side exclude, because plain excludes act as sender-side excludes when --delete-excluded is also given. This can be done with a filter rule instead of an exclude rule, resulting in the following option: --filter=-r_/<folder>. The - specifies it's an exclude, the r makes it apply to the receiving side, and the _ separates the modifiers from the path (a space is also allowed, but an underscore avoids the need for quoting or even double quoting). Now there's one nasty issue remaining: the excluded folder will still be scanned on the sender, so let's make it an exclude for both sender and receiver: --filter=-rs_/<folder>.

Using the above, it's now possible to exclude files from an rsync transfer without removing them on the receiving side, while still deleting other excluded files on the receiving end. In short: rsync --exclude='*.tmp' --filter='-rs_/important/' --delete --delete-excluded <source> <dest> will leave the important folder alone on the destination, but will remove all .tmp files from the destination.
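The combined behaviour is easy to verify locally with a throwaway tree (all paths below are temporary demo paths, and rsync needs to be installed):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/src/important" "$tmp/dst/important"
touch "$tmp/src/keep.txt" "$tmp/dst/old.tmp" "$tmp/dst/important/data"

rsync -a --exclude='*.tmp' --filter='-rs_/important/' \
      --delete --delete-excluded "$tmp/src/" "$tmp/dst/"

ls "$tmp/dst"             # keep.txt was copied, old.tmp was deleted
ls "$tmp/dst/important"   # data was left alone
rm -r "$tmp"
```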

Monday, June 21, 2010

OpenLDAP default search base

Although it's possible to specify a search base on the client when doing an ldapsearch, it's often nicer if the server can have it set correctly already. I noticed there's an olcDefaultSearchBase attribute for olcDatabase entries, however you can only use it on entry -1, the frontend database. This makes sense, because for one LDAP server instance you can only have a single default search base.

The following LDIF will set the default search base to dc=denc,dc=nl:
dn: olcDatabase={-1}frontend,cn=config
changetype: modify
add: olcDefaultSearchBase
olcDefaultSearchBase: dc=denc,dc=nl

Works like a charm for me!

Thursday, June 17, 2010

Recovering from glue objects in OpenLDAP

After some syncing issues and a few transfers of /var/lib/ldap between servers, our company LDAP database had lost its root organization entry. Doing a slapcat showed the entry listed with objectClass glue and all of its attributes gone. This was the same on all of our servers.

The first thing that came to mind to fix this issue was doing an ldapmodify on the entry, however ldapmodify would return ldap_modify: No such object (32). The logical next step would then be to add the object, since ldapmodify complains it's not there... However, that would result in ldap_add: Already exists (68)! Amazing, one program telling me the object can't be modified because it's not there, the other telling me I can't add it because it exists.

I did some searching, but couldn't find a proper solution or anyone with a similar issue. I could of course start from scratch, but that would destroy the sync status, modified timestamp, modifier's name, create timestamp and creator's name, and perhaps even more, so that wasn't really an option in my humble opinion.

During my (re)search I did come across slapadd. slapadd can be used to do offline database edits (at least additions to the database). So I stopped slapd, and fired up slapadd and entered my LDIF... Same issue! The entry exists, so it can't be added. slapadd doesn't seem to support modify either (I'm not complaining, just stating the facts), so I had to figure out something else...

Suddenly I had it all figured out. slapadd and slapcat are similar tools in that they operate directly on the database instead of talking to slapd. Thus if you slapcat your database you can give the output back to slapadd!
# slapcat -n 1 > entries.ldif
# slapadd -n 1 -l entries.ldif

Of course this very simple code example will result in similar errors, because all your entries are already there. Besides, it would also be nice to edit the broken entry while we're at it, which will result in the following list of commands to complete it all (code assumes broken tree is database number 1, replace with your database index if it's not the first database):
  1. # cp -ar /var/lib/ldap{,.bak}
  2. # slapcat -n 1 > entries.ldif
  3. # rm -r /var/lib/ldap
  4. # mkdir -p /var/lib/ldap/bdb
    This line assumes a BDB database, you can probably replace bdb with hdb if you're using HDB
  5. Now edit entries.ldif so your entry makes sense again. Just fix the objectClass (be sure to create a correct objectClass chain, e.g. top, dcObject, organization), the structuralObjectClass, and the attributes required by the newly set objectClasses (e.g. dc, o).
  6. # slapadd -n 1 -l entries.ldif

Now your entry should be back again, with a proper objectClass and related attributes. If you get errors along the way, make sure there aren't more entries with attributes that aren't available in the schema files. Just remove the incorrect attributes (and probably incorrect objectClasses accompanying the attributes) from the LDIF and repeat the database delete and add steps (or remove everything earlier in the LDIF and just add the new entries using slapadd, of course!)

The last step would be to index the database. I don't know if it's required (slapd will run fine without), but before starting slapd run the following:
# slapindex -n 1

Now your LDAP tree should be back to a proper state again!

There's just one issue left... If you didn't change contextCSN attributes, slapd won't sync the entry to other servers because they will all think the entry never changed (and thus the other servers will keep the broken entry). There's an easy solution: just use ldapmodify to change an attribute and the contextCSN will update and the change will propagate to the other servers. The real fix would be to change the contextCSN for the rid of the server you're editing to the current time, however this is more prone to mistakes and the result should be the same (unless using delta syncrepl, where it is possible that only the change will get propagated.)

This was my not-so-short introduction to LDAP disaster recovery without losing contextual information. I'm hoping you enjoyed reading this post and that it helped you to recover from long-standing errors.

Wednesday, June 16, 2010

Kerberos SSH logins on Mac OS X

As a testing step of our Kerberos / Mac OS X integration, I was testing SSH using a Kerberos ticket. At first it didn't seem to work. However, SSH can easily provide more detailed debugging information, which I could compare with the output from a Linux machine that does successfully log in with a Kerberos ticket. It turned out GSSAPI authentication is disabled by default for SSH on Mac OS X; you can enable it by editing /etc/ssh_config:
Host *
GSSAPIAuthentication yes

or by passing the option to SSH on every connection:
$ ssh -o GSSAPIAuthentication=yes <host>

Thursday, June 3, 2010

Mac OS X and OpenLDAP

At work we had some issues trying to join Mac OS X machines to our Samba Windows domain. It turned out Mac OS X was doing a search with scope base and an empty base, which is meant to return information that can be used for compatibility, or as global knowledge about the LDAP tree: the RootDSE object. In our case that search returned nothing instead of the descriptive entry.

After quite a while we noticed the closed bug #427842 on Launchpad. The bug describes some missing access control rules that can lead to this problem. Although the bug is closed, the problem can still show up when migrating data from an older release, which was also the case for us. The bug also contains the required LDIF, which I'll copy here for future reference:
dn: olcDatabase={-1}frontend,cn=config
changetype: modify
add: olcAccess
olcAccess: to dn.base="" by * read
olcAccess: to dn.base="cn=subschema" by * read

You can feed this to ldapmodify or ldapadd (yes, ldapadd can also do modifies). A quick ldapsearch will reveal if it worked:
$ ldapsearch -x -b '' -s base

This should return an object of the OpenLDAPRootDSE objectClass (and empty distinguished name).

While we're at it, let's add another useful gem for Mac OS X: altServer attributes. Mac OS X searches for altServer attributes in order to find other servers that should provide the same data, in case the server is down (although I don't know when this data is cached).

It's possible to add attributes to the OpenLDAPRootDSE object by creating an LDIF file and pointing the olcRootDSE attribute on the config object to the created LDIF file. Create the following file, place it at /etc/ldap/rootdse.ldif:
altServer: ldap://server2.domain.tld/dc=domain,dc=tld
altServer: ldap://server3.domain.tld/dc=domain,dc=tld

Now add the following LDIF to OpenLDAP:
dn: cn=config
changetype: modify
add: olcRootDSE
olcRootDSE: /etc/ldap/rootdse.ldif

You can add this one using ldapmodify again.

Another quick ldapsearch will verify the attributes are really there:
$ ldapsearch -x -b '' -s base "+"

This should present quite a list detailing some support, including the just added altServer attributes.

Now there's one last thing that we should add to offer our Mac OS X users (or better, ourselves as sys admins!) a more pleasant experience: an Avahi (bonjour/zeroconf) entry for our OpenLDAP server. This will make the server show up as an option in some dialogs, for instance when adding an LDAPv3 directory server for authentication or contacts. To do this, add the following service file to avahi, for instance as /etc/avahi/services/slapd.service:
<?xml version="1.0" standalone='no'?><!--*-nxml-*-->
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <name replace-wildcards="yes">%h</name>
  <service>
    <type>_ldap._tcp</type>
    <port>389</port>
  </service>
</service-group>
The only additional step to integrate OpenLDAP even more with Mac OS X would be adding the Apple schemas and providing OpenDirectory support using OpenLDAP on Linux. I'll probably get to that later, but one thing I'll definitely post about is authentication against our existing OpenLDAP user tree.

Tuesday, April 6, 2010

Installing Alfresco 3.3 on Ubuntu Lucid Lynx LTS (10.04)

I happily reused my previous post on installing Alfresco to provide you with a new post detailing the setup of the forthcoming Alfresco release on the forthcoming Ubuntu release.

I'm still trying to figure out a proper way to format content, but it should be readable at all times.

Note that lines starting with a # (in typable commands) mean they should be executed as root. There are many ways to do this; my advice would be to prepend the commands with sudo. I'm trying to visually distinguish everything you need to type yourself (as opposed to shell output or existing file contents), but I'm human, so I will make mistakes every now and then. Quick tip: if you can't write a file from vim because you opened it as non-root, use :w !sudo tee % to write the file using sudo.

Start with updating your system:
# apt-get update

Install Tomcat, MySQL and mysql-connector:
# apt-get install tomcat6 mysql-server libmysql-java

Edit /etc/default/tomcat6:
JAVA_OPTS="${JAVA_OPTS} -XX:+UseConcMarkSweepGC"
JAVA_OPTS="${JAVA_OPTS} -Xms512m -Xmx512m"

Contrary to Ubuntu 9.04, the Tomcat security manager is disabled by default in 10.04. I guess this means the security manager is more of a problem than a solution, so I'm already feeling better about not using it.

Create the Alfresco directory tree:
# mkdir /opt/alfresco
# cd /opt/alfresco
# wget
# tar xf alfresco-community-war-3.3.tar.gz

You can use something else instead of /opt, but it seems to me this is a desirable location.

I consider myself somewhat experienced with Alfresco, so I'm not downloading the sample extensions...

Create Alfresco database and user:
$ mysql -u root -p < extras/databases/mysql/db_setup.sql

Create Alfresco and Tomcat directories:
# mkdir -p /srv/alfresco/alf_data
# mkdir -p /var/lib/tomcat6/shared/classes

I'm using /srv as the data root; I should also move shared/classes to that location. In my previous guide I used /var/lib/tomcat6/shared/lib/ as the base for additional JARs (in this case the mysql-connector), but the default config assumes these JARs reside in /var/lib/tomcat6/shared/, so I'm not going to deviate from that assumption.

Add links to war files to tomcat webapps:
# ln -s /opt/alfresco/alfresco.war /var/lib/tomcat6/webapps/
# ln -s /opt/alfresco/share.war /var/lib/tomcat6/webapps/

Add mysql connector to path where tomcat finds it:
# ln -s /usr/share/java/mysql-connector-java.jar /var/lib/tomcat6/shared/

Setup Alfresco global settings:
# cp /opt/alfresco/extensions/extension/ /var/lib/tomcat6/shared/classes/

Edit the just copied file:

It seems that in some cases it's necessary to also include the hibernate dialect in this config file. You can do so by adding the following line:

Create the Alfresco extension root:
# mkdir -p /var/lib/tomcat6/shared/classes/alfresco/extension/

This directory is used to override alfresco configuration without changing the deployed WAR.

Setup logging in /var/lib/tomcat6/shared/classes/alfresco/extension/
log4j.rootLogger=error, File

log4j.appender.File.layout.ConversionPattern=%d{ABSOLUTE} %-5p [%c] %m%n

Make sure permissions are reasonable:
# chown -R tomcat6:tomcat6 /var/lib/tomcat6 /srv/alfresco

Restart Tomcat and enjoy!
# /etc/init.d/tomcat6 restart

Now you should be able to reach Alfresco on [ip]:8080/alfresco and Alfresco Share on [ip]:8080/share.

Sunday, April 4, 2010

Ubuntu Lucid on (X86) Mac Mini with EFI

I was going to use my Mac Mini as a replacement home server (it uses less power than an idle Core 2 Duo + GeForce 8800 GTS + 4 hard disks), so I decided to put Ubuntu Lucid on it. Installation was really easy: I used the Lucid Lynx Beta 1 Server install CD and the installation went just fine. Press/hold c to boot from the CD, and be sure to create a separate boot partition or keep some free space to create one afterwards.

Just installing Ubuntu is no issue at all. You don't even need to create a separate boot partition, because it'll boot just fine using the Mac's legacy booter. However, if you're a Mac Mini owner and want to boot headless, there is only one solution (plus a workaround that uses a 'dummy monitor dongle', which is not what I would want). This solution is making use of the EFI features of GRUB 2, which I'll detail in the rest of this post.

When the installation is done, install hfsplus and hfsprogs so you're able to create HFS+ volumes (you probably only need one of those, but it wasn't exactly clear to me which one, and due to lack of time I haven't looked any further yet). Copy the files from your boot partition to a temporary place, then unmount and format the boot partition as HFS+ (mine is /dev/sda2):
# mkfs.hfsplus -v boot /dev/sda2

Edit /etc/fstab to use the new boot partition as boot partition:
/dev/sda2 /boot hfsplus defaults 0 2

Unfortunately it seems UUIDs can't be used for HFS+ volumes at the moment, so hardcode the device name in there.

After creating the volume mount it again and copy over all the files from your temporary space.

Now install grub-efi; this will automatically remove grub-pc. Then generate a GRUB EFI executable using the following command:
# grub-mkimage -o /boot/grub/grub.efi -p /grub part_gpt hfsplus fat ext2 normal sh boot configfile linux

You actually don't even need the fat and ext2 modules and probably more can be stripped, but I haven't experimented with GRUB 2 a lot yet, that'll be for another day (and another post).

We're almost done; now it's on to getting the Mac to actually use our EFI-enabled GRUB. First, toggle the boot flag on your shiny HFS+ partition. Run parted (or any other GPT-aware partitioning tool) and type:
(parted) set 2 boot off

Parted will probably tell you the disk is in use and that you need to reboot for the change to become effective, but that doesn't matter for us.

Now the final step is to tell the Mac (or actually, the filesystem) that our grub.efi is bootable, so it'll show up in the Mac boot menu. There should be a utility called hfspbless, which allows you to do this from within Linux, but the first hit on Google doesn't offer a quick guide, so I skipped this part. Instead, put in the Mac OS X install DVD. As soon as a menu bar shows up (I believe you have to click next at least once), fire up a terminal and enter the following:
# mkdir /Volumes/boot
# mount_hfs /dev/disk0s2 /Volumes/boot
# bless --folder=/Volumes/boot --file=/Volumes/boot/grub/grub.efi --label boot --setBoot

The bless command has now set some metadata on the HFS+ filesystem that the Mac uses to identify a native bootable image. I assumed --label would set the label for the boot menu accordingly; however, my entry showed up as 'EFI something' IIRC, but I couldn't care less, since it's a server and I'll never see the menu anyway. Now reboot and enjoy!

This did the job for me; however, there are a few issues I still have to take care of. The Mac created a fake MBR partition map for me, which I don't need and don't use. It now shows 'Windows' as an option in the Mac boot menu, but luckily it starts Linux by default. Also, the MBR partition map gets out of sync if you do stuff with the GPT partition table. I used rEFIt to resync the MBR table, but when I figure out how to remove the MBR, that's what I'm going to do.

Also, there's a file called Volume Name Icon on my boot partition. I guess this is used for the Mac boot menu, so it can probably be changed easily too. However, I have no clue what the format is; I'll have to look it up some day and change it for a genuine Tux!

Monday, February 1, 2010

Installing Alfresco on Ubuntu Jaunty (9.04)

Some of the information in here comes from an older guide. Even though that guide is old, most of the information is still correct, albeit some file locations and names have changed and it was written for another distribution.

  1. Start with updating your system

    # apt-get update

  2. Install tomcat and mysql-connector

    # apt-get install tomcat6 libmysql-java

  3. Edit tomcat startup settings

    # /etc/default/tomcat6

    #JAVA_OPTS="-Djava.awt.headless=true -Xmx128M"
    JAVA_OPTS="$JAVA_OPTS -Xms512m -Xmx512m"


    Take note of the fact that I disabled security; otherwise you need to create a policy file with everything that is allowed in it. I guess that's not that hard, but I wanted to get Alfresco running at all first.

  4. Create the Alfresco root

    # mkdir /opt/alfresco
    # cd /opt/alfresco
    # wget
    # wget
    # wget
    # unzip

    # mkdir wcm
    # unzip -d wcm

  5. Create Alfresco database and user

    $ mysql -u root -p < extras/databases/mysql/db_setup.sql

  6. Create Alfresco and tomcat directories

    # mkdir -p /var/lib/alfresco/alf_data/
    # mkdir -p /var/lib/tomcat6/shared/{classes,lib}

  7. Add Alfresco and Alfresco share wars to tomcat

    # ln -s /opt/alfresco/alfresco.war /var/lib/tomcat6/webapps/
    # ln -s /opt/alfresco/share.war /var/lib/tomcat6/webapps/

  8. Add mysql connector to path where tomcat finds it

    # ln -s /usr/share/java/mysql-connector-java-1.5.6.jar /var/lib/tomcat6/shared/lib/mysql-connector-java.jar

  9. Add extension sample files to tomcat

    # unzip -d /var/lib/tomcat6/shared/classes/

  10. Setup Alfresco global settings

    cp /opt/alfresco/extensions/extension/ /var/lib/tomcat6/shared/classes/
    Edit the contents of the file:
    # /var/lib/tomcat6/shared/classes/


  11. Add WCM bootstrap

    # cp wcm/wcm-bootstrap-context.xml /var/lib/tomcat6/shared/classes/alfresco/extension/

  12. Setup catalina loader paths

    # /var/lib/tomcat6/conf/

  13. Fix permissions

    # chown -R tomcat6:tomcat6 /var/lib/{tomcat6,alfresco}

  14. Do a first run so the wars get extracted

    # /etc/init.d/tomcat6 restart

  15. Setup log file

     # /var/lib/tomcat6/webapps/alfresco/WEB-INF/classes/


  16. Restart tomcat so log settings are re-read

    # /etc/init.d/tomcat6 restart

Now you should be able to reach Alfresco on [ip]:8080/alfresco and Alfresco Share on [ip]:8080/share.

It seems that Alfresco prefers OpenJDK for me; I was getting out-of-memory errors when using Sun's JDK. However, if OpenJDK is installed on Jaunty, there's a symlink for rhino at /usr/lib/jvm/java-6-openjdk/jre/lib/rhino.jar, which prevents loading of the rhino included in the Alfresco WAR and will result in errors in Alfresco Share (and probably other places too). A # rm /usr/lib/jvm/java-6-openjdk/jre/lib/rhino.jar fixes this.