Tag Archives: IT

Server Migration Guide

I recently did a hardware migration for a 24/5 trading system. After a successful go-live and one week without any major incidents, I think it is a good topic to share. Besides the standard things you have to do (e.g. project management related topics), I would like to sum up some points which might be important or interesting to consider.

Maybe a few words on the existing environment I upgraded. It is basically a 24/5 trading system running on a Solaris 10 machine on an underlying SPARC architecture. The system has a lot of Java interfaces for the back-office processing and additionally proprietary interfaces for providing the rates. Besides the core machine, there are additional machines acting as secure gateways for internal and external access as well as web servers. Within the upgrade we switched to an x86 architecture as well as to newer Java, Apache and Tomcat versions. Based upon this setup we have a development, a UAT and a production environment in place.

Prerequisites 

As mentioned, there are some prerequisites, which are company dependent – mainly project documentation and the project management methodology. Furthermore, ensure a proper planning of resources and certain buffers, as there are always unexpected issues. Also involve the business from the beginning, even if you are doing just a “simple” upgrade.

A major point for the prerequisites is to always order the hardware in time – major providers take some time to deliver it. Nevertheless, you can order in different phases, starting with development, followed by UAT and finally production. This might save some operational costs.

Make yourself a plan of the software and versions you would like to run on. Request proprietary software in time – especially when you are switching the underlying architecture (from SPARC to x86). Furthermore, check the dependencies – certain software requires certain versions of underlying tools or runtime environments (e.g. Java).
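
A quick way to double-check what is actually installed on a given box might look as follows (a sketch – apache2ctl applies to Debian/Ubuntu and the Tomcat path is an assumption depending on your installation):

java -version                  # Java runtime actually on the PATH
apache2ctl -v                  # Apache version (httpd -v on other distributions)
/opt/tomcat/bin/version.sh     # Tomcat version; adjust to your CATALINA_HOME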

Track your activities

After having the prerequisites in place, you can actually start with the hands-on work. In my case the operating system and the cluster software were set up by another team. Each single task or activity you do during the setup should be documented. This helps in three ways:

  • It makes it easier to recover from issues/problems or at least to find out the possible root-cause.
  • Certain activities (e.g. firewall changes and configurations) will have to be repeated on the other environments – having them documented makes this straightforward and reduces the playing around.
  • Having an activity list makes it easier to create a go-live plan.

Within my activity list I tracked the following points: activity, date of activity, dependency, responsible, status (new, in progress, done), environment as well as comments/remarks.

Testing

Each project requires proper testing. But testing should not only be the task of IT – it requires a full front-to-back test. From my point of view the following tests should be included:

  • Technical system tests: Basically testing the safe start and stop of the application, a full failover, cluster and load balancer tests. Test the memory management and the performance of the application. Also test the compatibility of software versions – e.g. if you are planning to upgrade Java, there might be parts which are affected somewhere.
  • Connectivity tests: Having a lot of interfaces, each one is required to transfer data from one system to another. To ensure that this will work, each connection (incoming as well as outgoing) has to be tested – see the sketch after this list.
  • Functional business tests: Even if the system version remains unchanged, it is important that the business does a complete test of the system – whether everything behaves as it should. This should include (as far as possible) a full end-to-end test, meaning that the whole business process should be covered.
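
For the connectivity tests, a simple reachability check per connection already catches most issues. A minimal sketch (host names and ports are made up for illustration):

# outgoing: can we reach the counterparty interface?
nc -zv rates-provider.example.com 9001

# incoming: is our own interface actually listening?
ss -tln | grep 9443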

Only with an OK from the business should you move forward to the next phase. Meaning: after having the business OK for the development environment, we went forward and installed the UAT environment. And only after successful end-to-end testing on UAT did we start setting up the new production environment.

Go-Live Planning

For the go-live it is of utmost importance to have a plan and to repeat everything which was required for the already set-up and running systems. I usually note each single activity, the dependencies to other activities, the responsible person, comments/remarks for special operations as well as a check-box where the activity is marked as “done” after completion.

Also plan that you might need additional resources in areas which are not under your responsibility. This could be support from the Database team, Operating System Support or Network Administration. It is always good to be able to reach them in case of need.

For the go-live planning, also think about the migration itself and a fallback scenario. If you have a web service: are you going to switch the DNS name or IP addresses? Are you going to need new certificates? In my case we simply attached the new servers to the existing load balancers. During the go-live there were two major activities for the actual migration: the first was taking the old service offline on the load balancer, the second was switching on the new service on the load balancer.
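
If your load balancer happens to be HAProxy (ours is not named here – the backend and server names below are made up for illustration), both steps can be done via the runtime socket:

# take the old service out of rotation
echo "disable server trading_back/old-node" | socat stdio /var/run/haproxy.sock

# switch on the new service
echo "enable server trading_back/new-node" | socat stdio /var/run/haproxy.sock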

The Go-Live

If you are able to do a lot of the setup in advance (or even have a parallel phase) – go for it. The go-live/migration is usually one of the busiest and most tense situations of the whole upgrade project. Ensure that you strictly follow your go-live plan and work it down step by step.

After having set up everything required for a startup – check everything again, even if you had an extensive testing phase. From my point of view the following points should be considered (a command-line sketch follows the list):

  • System status: Operating system, cluster software, memory and hard-disk status.
  • Connectivity: Once again – check if all connections are working – you don’t want to have a trading system without rates.
  • Accessibility: If you have the ability to log-in/check the application – do it!
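
A minimal set of shell checks covering these three points (host names and ports are assumptions – adapt them to your environment):

# system status: disk, memory, load
df -h
free -m
uptime

# connectivity: are the rates still coming in?
nc -zv rates-provider.example.com 9001

# accessibility: does the application front-end answer?
curl -sk https://localhost/ | head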

Communication is key! If you upgraded/migrated successfully – inform your colleagues and the business. Follow the principle: do good and let everybody know. Not only for self-marketing, but also to ensure that everybody is aware and reactive if issues occur in the first hours.

A side note: if you are doing the migration during the night – also think about having the right amount of coffee and sweets!

Beyond the Go-Live

After the go-live, plan some extended support. Unexpected issues might come up – and having said that, you should be able to deal with them. Also (but this might as well be a point of the go-live planning) think about a fail-back scenario and make yourself a plan for how to switch back.

Howto get BOINC running on a Linux Server

I’ve always been a fan of scientific projects and like to support and help where I can. For a long time I have been a supporter of the Seti@home project [1]. Therefore I will set up my server to do some processing, since it is idling most of the time.

So let us install the BOINC client first. Under Ubuntu it is as simple as:

root@jvr:~# apt-get install boinc boinc-client

OK, now we have the client installed, but of course it doesn’t start operating automatically. Actually, that is technically wrong: the client runs as a server process, but no project is attached.

So to attach a project you can use the command line tool boinccmd [2], with the URL of the project and the account key:

root@jvr:~# boinccmd --project_attach https://setiathome.berkeley.edu/ d33ad5ca2e17af1d08c85268aabb4ae5

For a list of available BOINC projects, please check [3]. Since the BOINC client will not remember the attached projects after a reboot, we should add the project permanently.

Therefore I created a file called account_setiathome.berkeley.edu.xml in /etc/boinc-client/ with the following content:

<account>
   <master_url>http://setiathome.berkeley.edu/</master_url> 
   <authenticator>232395_2af9483f6a12147ce849776db1a98ad2</authenticator> 
</account>

This setting I basically took from the Account Keys page of the Seti@Home project, which can be found under [4].
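
To make the new account file effective and to verify that the project is attached, a quick check might look like this (assuming the Debian/Ubuntu init script name boinc-client):

root@jvr:~# /etc/init.d/boinc-client restart
root@jvr:~# boinccmd --get_project_status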


[1] Seti@Home

[2] Boinccmd tool wiki

[3] BOINC Project List

[4] Seti@Home Account Keys

Howto set-up your own cloud with Seafile

Based on all the NSA sniffing and the recent article about who provides which information to whom [1], I decided to set up my own cloud on my private server. And actually – it was surprisingly easy! Searching around the internet, Seafile [2] seemed to be the most appropriate solution, since it is open source, provides a nice web interface and has a client for all common operating systems and devices.

So log in at the server, get root and download the server via wget:

root@jvr:~# wget https://bitbucket.org/haiwen/seafile/downloads/seafile-server_3.0.3_x86-64.tar.gz
--2014-05-18 16:26:06--  https://bitbucket.org/haiwen/seafile/downloads/seafile-server_3.0.3_x86-64.tar.gz
Resolving bitbucket.org... 131.103.20.168, 131.103.20.167
Connecting to bitbucket.org|131.103.20.168|:443... connected.
HTTP request sent, awaiting response... 302 FOUND
Location: http://cdn.bitbucket.org/haiwen/seafile/downloads/seafile-server_3.0.3_x86-64.tar.gz [following]
--2014-05-18 16:26:07--  http://cdn.bitbucket.org/haiwen/seafile/downloads/seafile-server_3.0.3_x86-64.tar.gz
Resolving cdn.bitbucket.org... 54.230.13.87, 54.230.13.88, 54.230.13.118, ...
Connecting to cdn.bitbucket.org|54.230.13.87|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 18399709 (18M) [application/x-tar]
Saving to: `seafile-server_3.0.3_x86-64.tar.gz'
100%[============================================================================================>] 18,399,709  51.0M/s   in 0.3s    
2014-05-18 16:26:07 (51.0 MB/s) - `seafile-server_3.0.3_x86-64.tar.gz' saved [18399709/18399709]
root@jvr:~#

Of course, now we have to unpack the archive:

root@jvr:~# tar xzf seafile-server_3.0.3_x86-64.tar.gz
root@jvr:~# cd seafile-server-3.0.3/

Just before we install, there are some packages which are required. For my system I needed to install the following additional packages:

root@jvr:~# apt-get install python python-setuptools python-simplejson python-imaging

If anything else is missing, Seafile will point it out during the installation, so no need to panic. So let’s get to the installation itself:

root@jvr:~/seafile-server-3.0.3# ./setup-seafile.sh

Follow the installation instructions – it should be quite straightforward. If you face any issues, the Seafile wiki [3] should be quite helpful. I installed the Seafile server under /usr/share/ while I keep the data storage under /opt/seafile-data. If everything goes fine, the Seafile server should be running with the following services under the listed ports:

port of ccnet server:         10001
port of seafile server:       12001
port of seafile httpserver:   8082
port of seahub:               8000

Please note that the seahub service, which provides the web front-end of the Seafile server, needs to be started separately:

root@jvr:/usr/share/seafile-server-3.0.3# ./seahub.sh start

OK, so far so good – everything should be up and running and you should be able to log in via the web interface on port 8000.
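
A quick way to confirm that all four services are actually listening on their ports (a sketch – on older systems netstat -tln does the same job as ss):

root@jvr:~# ss -tln | egrep '8000|8082|10001|12001'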

The next thing I did was to create the links under /etc/init.d/ as follows and add both to the default run levels, so that the services fire up automatically on a restart/start:

root@jvr:/opt# cd /etc/init.d/
root@jvr:/etc/init.d# ln -s /usr/share/seafile-server-latest/seafile.sh .
root@jvr:/etc/init.d# ln -s /usr/share/seafile-server-latest/seahub.sh .
root@jvr:/etc/init.d# update-rc.d seafile.sh defaults
root@jvr:/etc/init.d# update-rc.d seahub.sh defaults

And now the tricky part. As you might have noticed in my other blog entries [4],[5],[6], I am a bit of a security fanatic. Therefore I’d like to secure certain critical parts additionally – this time the Seafile web service. So first I create an additional site within the Apache configuration under /etc/apache2/sites-available/seafile with the following content:

<VirtualHost seafile.jvr.at:443>
    ServerName seafile.jvr.at
    HostnameLookups Double
    CustomLog /var/log/apache2/access.log combined env=!dontlog
    SetEnvIf Request_URI "^/u" dontlog
    ErrorLog /var/log/apache2/error.log
    Loglevel warn
    SSLEngine On
    SSLCertificateFile /etc/apache2/ssl/apache.pem
    <Proxy *>
        AuthUserFile /srv/seafile/.htpasswd
        AuthName EnterPassword
        AuthType Basic
        require user seafile_user
        Order Deny,allow
        Allow from all
    </Proxy>
    ProxyPass / http://localhost:8000/
    ProxyPassReverse / http://localhost:8000/
</VirtualHost>

Now let’s create the htpasswd file within the corresponding directory:

root@jvr:~# mkdir /srv/seafile
root@jvr:~# cd /srv/seafile
root@jvr:/srv/seafile# htpasswd -cm /srv/seafile/.htpasswd seafile_user

Link the apache site to the sites-enabled and reload the apache service:

root@jvr:~# cd /etc/apache2/sites-enabled/
root@jvr:/etc/apache2/sites-enabled# ln -s ../sites-available/seafile .
root@jvr:/etc/apache2/sites-enabled# /etc/init.d/apache2 reload
* Reloading web server config apache2 [ OK ] 
root@jvr:/etc/apache2/sites-enabled#

And of course, disable external access to port 8000 on your firewall. Your web service should now be accessible with an additional layer of basic authentication. Side note: since certain ports are blocked within certain companies, this setup additionally enables you to access the service via https.
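
With iptables, blocking direct external access to seahub could look like this (a sketch assuming seahub listens on port 8000 and only the local Apache proxy should reach it):

# allow the local proxy, drop everybody else on port 8000
iptables -A INPUT -p tcp --dport 8000 -s 127.0.0.1 -j ACCEPT
iptables -A INPUT -p tcp --dport 8000 -j DROP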

[1] Gizmodo.com, Which Tech Companies Protect Your Data From the Government?

[2] Seafile.com, Next-generation Open Source Cloud Storage

[3] github.com, Seafile: Deploy/Upgrade Seafile Server

[4] jvr.at, Basic Security for Linux Hosts 

[5] jvr.at, Book Review: The Cuckoo’s Egg

[6] jvr.at, Anonymous SSH over Tor and disconnect without a trace 


Basic security for Linux hosts

After reading The Cuckoo’s Egg by Clifford Stoll [1] I got a bit unsure whether my Linux server is set up securely enough. Even if the story about the hacker is quite old, it nevertheless highlights the importance of security and of being careful enough when connecting a machine to the net.

Additionally, having some history and experience in security, I decided to have a closer look at my Linux server to double-ensure its security.

1.) Passwords

First of all – and the source of many problems – passwords. So let’s create a password which has no relation to the user, the content or the server itself. Passwords should have a certain length, numbers, lower- and upper-case characters – and at least one special character. If your brain is unable to generate such a password, you can use the pwgen command under Linux.

root@lvps5-35-244-75:~# pwgen -y 12    # -y includes special characters, 12 is the password length

Since we now have created a secure password, we should limit our remote access to certain users. Most systems have several accounts, so question yourself whether all of them really require SSH access. (Disabling remote login for the privileged root account is covered in section 3 below.)
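
Restricting SSH access to an explicit list of users can also be done in /etc/ssh/sshd_config – a minimal sketch (the user names are placeholders):

# only these accounts may log in via SSH
AllowUsers alice bob

A restart of the sshd is required afterwards (see section 3).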

2.) Automatic Security Updates

To enable automatic security related updates under Ubuntu you should install the unattended-upgrades package.

apt-get install unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades

Further details regarding the unattended upgrades and their specifics can be found under [3]. Personal note: as on my installation unattended-upgrade was not doing the upgrades automatically, I simply added the command to my crontab to be fired up each day at 03:00. To change your crontab use:

crontab -e

and add the following line

0 3 * * * /usr/bin/unattended-upgrade

Also quite helpful in this context is the apticron package, which automatically informs you about package updates.

apt-get install apticron
vim /etc/apticron/apticron.conf


3.) Disable external root access

One of the basic security to-dos after a server set-up should be disabling the remote root SSH login. This can easily be done by changing the following parameter in /etc/ssh/sshd_config:

PermitRootLogin no

Please note that a change on the sshd requires a restart of the service (on Ubuntu the init script is called ssh), which can be done via:

/etc/init.d/ssh restart

4.) Take a look beyond the walls: Check for additional services

I find it quite helpful to do an external scan to see which services are exposed. This can be done quite easily and straightforwardly via nmap. So let’s install it and do a quick scan:

root@abc:~# apt-get install nmap
root@abc:~# nmap -f xyz.com
Starting Nmap 5.00 ( http://nmap.org ) at 2013-11-06 23:16 CET
 Interesting ports on jvr.at (5.35.244.75):
 Not shown: 985 closed ports
 PORT     STATE SERVICE
 21/tcp   open  ftp
 22/tcp   open  ssh
 25/tcp   open  smtp
 53/tcp   open  domain
 80/tcp   open  http
 106/tcp  open  pop3pw
 110/tcp  open  pop3
 143/tcp  open  imap
 443/tcp  open  https
 465/tcp  open  smtps
 587/tcp  open  submission
 993/tcp  open  imaps
 995/tcp  open  pop3s
 3306/tcp open  mysql
 8443/tcp open  https-alt

Of course there are a lot more security related tips & tricks, but I thought this might be a starting point. Another starting point which I find quite useful is [2].


[1] Clifford Stoll, CUCKOO’S EGG

[2] Ravi Saive, 25 Hardening Security Tips for Linux

[3] Ubuntu Help, Automatic Security Updates

Extract an XML tag with Oracle SQL

Have you ever been struggling with an Oracle Database which stores XML values within a VARCHAR2 or CLOB field? Needed to do an evaluation where one specific tag was required?

I had to deal with this problem recently and it turned out not to be as hard as expected. Oracle has built-in XML functionality, which enables you to do some magic.

So for my query, I first needed to convert the CLOB or VARCHAR2 element [1] into an XML object – this can be done via XMLTYPE [2].

Now that we have an XML object, it is more or less easy to extract a tag with the “extractValue” function. Simply give the XML object and the search pattern for the tag as arguments. The search pattern starts with a slash followed by the parent element, then a slash and the child element, and so on. Personal note: never put a slash at the end. Further details regarding the XPath construct are given in [3]. The return type is always a VARCHAR2.
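
A minimal self-contained example (the tag names are made up – they simply mirror the query below):

select extractValue(
         XMLTYPE('<deal><source_id>42</source_id></deal>'),
         '/deal/source_id') as SOURCE_ID
from dual;
-- returns the VARCHAR2 value '42'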

In my specific case I built the following SQL, which extracts all deals from 2013 and shows their distinct SOURCE_ID values along with their counts.

select distinct SOURCE_ID, count(SOURCE_ID)
from (
     select extractValue(XMLTYPE(ORIGINAL_TRADE), '/deal/source_id') "SOURCE_ID"
     from DEALS.ARCHIVE where TIMESTAMP like '2013%'
) group by SOURCE_ID order by SOURCE_ID;

As a general starting point for Oracle SQL topics I can recommend [4].

[1] Oracle, Data Types

[2] Oracle, XMLType

[3] Oracle, ExtractValue

[4] psoug.org

Book Review: The Cuckoo’s Egg

Clifford Stoll’s The Cuckoo’s Egg [1] was written in 1989 and is based on a real-life hacker story.

The story starts by introducing Clifford, an astronomer who got captured by the mainframes of Lawrence Berkeley Lab as an administrator.

One of his first tasks was to figure out a glitch in the accounting system which resulted in a 75-cent difference. At this point Clifford didn’t know where this would lead him. Taking the login time as a basis for the accounting system, it turned out that a former user was active, who had basically moved to England some time ago. Nevertheless, the user was active and seemed to be logged into the system locally. His username was Sventek.

Suspecting that it might be a hacker, Clifford starts to monitor Sventek’s activities and soon it turns out that he is right. Equipped with computers, teletypes and printers which he borrowed from different departments, Cliff watches every keystroke hit by “Sventek”. He monitored the Tymnet [2] connections, where the hacker usually connected. Tymnet was basically an international network connecting the major cities. The big advantage was that the university had only 5 Tymnet connections, so it required fewer resources to monitor – but he still needed to “borrow” the equipment. Since his setup beeped twice as soon as somebody logged into the systems, Cliff was unable to get a good sleep under his desk, as some people check their mails at night as well. Nevertheless, the hacker logged on and left a trace on the typewriter. Based on this, Cliff was able to find out how the hacker became superuser – via a cuckoo’s egg. Through a bug in the GNU Emacs editor [3], Sventek was able to replace the atrun job scheduler [4] with his own version. As soon as the new atrun fired up, it granted superuser rights – that’s the cuckoo’s egg. Having this concrete proof, Cliff tried to approach the FBI, CIA, NSA and other agencies; everybody was interested, but nobody felt responsible or saw the need to react.

Watching the hacker break into foreign systems day after day, Cliff usually tries to contact the local system administrators so that they take certain actions like resetting the passwords and/or updating the system. Over time he gets more and more frustrated, and furthermore his boss puts pressure on him to close up shop.

Back and forth, Cliff was able to trace the hacker to the German Datex network, but for a further trace a search warrant was required. As this was starting to become an international case, the FBI and the CIA certainly got more interested in it. Finally they managed to receive the search warrant. The only open problem was to keep the hacker on the line long enough to complete the trace. In one of the discussions between Cliff and his girlfriend, operation Showerhead was born to overcome this problem. The idea was simple: create files which would interest the hacker. And how to do that? Take any kind of scientific or research documents, replace Mr. with General and Professor with Sergeant or Major, and add some “spicy” words. Furthermore they created a project called SDINET and made up some mail traffic in those regards. In one of the mails they also noted that if further information regarding access to SDINET was required, one should contact the secretary by post. Never thinking that somebody would actually apply, a letter was received and immediately confiscated by the FBI.

In addition, the hacker started downloading all those made-up files, enabling Cliff to call Tymnet and start the trace. Finally the FBI received the number, but did not share it with Cliff. Never being told who the real hacker was, he at least got the information that they had searched his home and confiscated all his equipment. A few weeks later, the story was on the news. Hackers closely related to the CCC [5] had been involved, but were finally arrested.

Cliff finally returned to his job as an astronomer and got married.

The other side of the story was covered in a German movie called “23 – Nichts ist so wie es scheint” [6], which I can highly recommend (for those who understand German).

[1] Clifford Stoll, The Cuckoo’s Egg

[2] Wikipedia, Tymnet

[3] GNU Emacs

[4] unix.com, atrun manpage

[5] Chaos Computer Club

[6] Wikipedia.de, 23 Nichts ist so wie es scheint

Ajaxterm – ssh access via the web-browser

Quite often I am trying to access my Linux box remotely. Unfortunately, most of the time port 22 (SSH) is closed for security reasons, leaving you disconnected from your home. Facing this issue, combined with my recent idea to get back to software development, it’s time to remove this boundary – let’s install Ajaxterm to get connected again.

Ajaxterm is a Python-based software using AJAX/JavaScript on the client side to provide an SSH terminal within a web browser. Combined with Apache’s authentication it should be quite safe as well.

So let’s start – first of all, I think it is quite clear that you need an externally accessible IP address as well as a web server, e.g. Apache. Using my own domain, I then created a sub-domain pointing at the same IP address as my main server. I simply use sub-domains as a structural way of accessing various services. On an Ubuntu system, the first thing after updating the environment is getting Ajaxterm installed with the following command:

root@jvr.at:/home/jvr# apt-get install ajaxterm

Now we should enable password authentication in /etc/ssh/sshd_config (the server-side configuration) by simply uncommenting the line:

PasswordAuthentication yes

The next step is to create a login/password on the Apache authentication level with the following commands (please replace “MyName” with your preferred user name and please don’t use any kind of simple password):

root@jvr.at:/home/jvr# mkdir /srv/ajaxterm
root@jvr.at:/home/jvr# cd /srv/ajaxterm
root@jvr.at:/srv/ajaxterm# htpasswd -cm /srv/ajaxterm/.htpasswd MyName

Okay – following a structured approach, let’s now create a separate Apache configuration file for Ajaxterm, /etc/apache2/sites-available/ajaxterm, with the following content:

<VirtualHost ajaxterm.jvr.at:443>
    ServerName ajaxterm.jvr.at
    HostnameLookups Double
    CustomLog /var/log/apache2/access.log combined env=!dontlog
    SetEnvIf Request_URI "^/u" dontlog
    ErrorLog /var/log/apache2/error.log
    Loglevel warn
    SSLEngine On
    SSLCertificateFile /etc/apache2/ssl/apache.pem
    <Proxy *>
        AuthUserFile /srv/ajaxterm/.htpasswd
        AuthName EnterPassword
        AuthType Basic
        require user MyName
        Order Deny,allow
        Allow from all
    </Proxy>
    ProxyPass / http://localhost:8022/
    ProxyPassReverse / http://localhost:8022/
</VirtualHost>

So please note that the config is based on the newly created sub-domain. Furthermore, we are using SSL, and via the “require user” line only the user created above (MyName) is allowed to access Ajaxterm. Since Ajaxterm is basically a locally running service, we have to set up a proxy.

But wait – having said before that we use SSL – I guess we need to install and create an SSL certificate first. Therefore run the following commands:

root@jvr.at:/srv/ajaxterm# apt-get install ssl-cert
root@jvr.at:/srv/ajaxterm# mkdir /etc/apache2/ssl
root@jvr.at:/srv/ajaxterm# /usr/sbin/make-ssl-cert /usr/share/ssl-cert/ssleay.cnf /etc/apache2/ssl/apache.pem

And finally enable the proxy, ssl and the newly created ajaxterm config file.

root@jvr.at:/srv/ajaxterm# a2enmod proxy_http
Considering dependency proxy for proxy_http:
Enabling module proxy.
Enabling module proxy_http.
Run '/etc/init.d/apache2 restart' to activate new configuration!
root@jvr.at:/srv/ajaxterm# a2enmod ssl
Module ssl already enabled
root@jvr.at:/srv/ajaxterm# a2ensite ajaxterm
Enabling site ajaxterm.
Run '/etc/init.d/apache2 reload' to activate new configuration!

Finally, just to be on the safe side, we should restart the Ajaxterm and Apache services:

root@jvr.at:/srv/ajaxterm# /etc/init.d/ajaxterm restart
root@jvr.at:/srv/ajaxterm# /etc/init.d/apache2 restart

And now check out your Ajaxterm (hint – use https to access your service)!
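
A quick smoke test from another machine might look like this (a sketch – replace the host with your own sub-domain; curl will prompt for the basic-auth password of MyName):

curl -k -u MyName https://ajaxterm.jvr.at/ | head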

2014/06/12: An additional note – some versions of Ajaxterm seem to have an issue running in daemon mode, where you receive a connection loss error. Surprisingly, if you start Ajaxterm from the console as a simple process, it works. To fix this issue I modified the startup script of my Ubuntu installation in /etc/init.d/ajaxterm as follows (that’s the diff; the lines marked < are my modified version):

42,43c42,43
<                         start-stop-daemon -b --start --group=$AJAXTERM_GID --pidfile $PIDFILE --exec $DAEMON -- --port=$PORT --serverport=$SERVERPORT \
<                                 --uid=$AJAXTERM_UID >/dev/null &&
---
>                         start-stop-daemon --start --group=$AJAXTERM_GID --pidfile $PIDFILE --exec $DAEMON -- --daemon --port=$PORT --serverport=$SERVERPORT \
>                                 --uid=$AJAXTERM_UID >/dev/null