Sunday, August 7, 2011
One of my favourite dialogues from Yes, Prime Minister!
Sir Humphrey: "With Trident we could obliterate the whole of Eastern Europe."
Jim Hacker: "I don't want to obliterate the whole of Eastern Europe."
Sir Humphrey: "It's a deterrent."
Jim Hacker: "It's a bluff. I probably wouldn't use it."
Sir Humphrey: "Yes, but they don't know that you probably wouldn't."
Jim Hacker: "They probably do."
Sir Humphrey: "Yes, they probably know that you probably wouldn't. But they can't certainly know."
Jim Hacker: "They probably certainly know that I probably wouldn't."
Sir Humphrey: "Yes, but even though they probably certainly know that you probably wouldn't, they don't certainly know that, although you probably wouldn't, there is no probability that you certainly would."
Jim Hacker: "I don't want to obliterate the whole of Eastern Europe."
Sir Humphrey: "It's a deterrent."
Jim Hacker: "It's a bluff. I probably wouldn't use it."
Sir Humphrey: "Yes, but they don't know that you probably wouldn't."
Jim Hacker: "They probably do."
Sir Humphrey: "Yes, they probably know that you probably wouldn't. But they can't certainly know."
Jim Hacker: "They probably certainly know that I probably wouldn't."
Sir Humphrey: "Yes, but even though they probably certainly know that you probably wouldn't, they don't certainly know that, although you probably wouldn't, there is no probability that you certainly would."
Tuesday, March 29, 2011
Streamlining Weblogic Portal Workshop IDE 10
My experience with the Weblogic Portal Workshop IDE has so far been positively suicidal!! I wish no one ever has to go through this experience... But however dark the cloud might be, there is always a silver lining... My silver lining has been the performance shortcuts that I had to learn the hard way to prevent the aforementioned suicide :)
I have a Lenovo T400 office laptop with a 2.0 GHz Centrino 2 vPro processor and 3 GB of RAM. Normally this would have been an ideal work laptop, but for the project I'm working on... this stinks!
Being the nosy character I am (and in no small measure, to prevent the aforementioned death), naturally I started to tune the IDE and the project...
- Tuning the Hardware
- Tuning the OS
- Tuning the Middleware
(1) Tuning the Hardware
Whatever the software, it's the hardware that has to run it. You cannot go beyond the limitations of the hardware. So naturally, the hardware has to grow to give you room to improve performance. Mine was a dinky little Lenovo T60 machine with 2 GB of RAM and a 5400 RPM HDD, compounded by months of file fragmentation.
An upgrade to a T400 with 3 GB of RAM and a new HDD image extended my life by at least 10 years :)
Get rid of the extra bulk of unnecessary installed programs and files... RAR/ZIP anything you don't need for your immediate usage.
On a different note, I'm not totally sure why it happens or why it behaves this way, but my javaw process has lately become unpredictable and kind of hangs every 2-3 hours. The disk I/O reads for the javaw process also frequently cross 100 million within an hour or so. Task Manager shows that of the two cores, one is busy while the other is not being used at all. Maybe this is due to a thread lock somewhere in the IDE. Looking at the Eclipse logs, it looks like the threads that control the UI facets of Eclipse are running into errors and locking out the other threads. Being too lazy to figure out and resolve the actual issue, on a hunch, I set the affinity of the javaw process to a single core (Task Manager -> right-click the javaw process -> Set Affinity). This seems to have resolved the issue. The IDE is slightly slower, but the disk I/O reads have come down considerably and the IDE no longer hangs or crashes.
(2) Tuning the OS
Whatever the raw power of your hardware, the OS gets a chunk for itself. Unless the Google Native Interface for Java takes off (running Java without an OS), you will have to take care to keep that chunk as small as possible. Standard OS optimizations include:
- Setting your page file to a user-defined size of 1.5 times the amount of RAM (with 3 GB of RAM, that works out to about 4.5 GB)
- Making sure antivirus scans are not running (if possible, exclude JAR, EAR and WAR files from the A/V scans... hard to do in an office setup)
- Trying "End Task" from Task Manager on the usual unnecessary background software. This might vary depending on the setup you have on your laptop... Bottom line -> the Portal IDE is huge, clunky and very demanding on your system. Be afraid of it and try to serve it well :)
(3) Tuning the Middleware / IDE
Before starting the workshop
- Play around with the JVM memory arguments in the eclipse.ini file to increase the memory available to Eclipse. I usually use [-vmargs -Xms768m -Xmx768m -XX:MaxPermSize=256m -XX:PermSize=256m -XX:NewRatio=3 -Xss256k]. Remember that in eclipse.ini the -vmargs section must come last and each argument goes on its own line. (A quick way to verify the arguments were actually picked up is sketched right after this list.)
- Add -XX:+UseParallelGC to the above memory arguments for parallel Garbage Collection.
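A quick way to sanity-check which arguments a JVM actually started with is to attach jconsole to the javaw process and look at the VM arguments, or to run a tiny class like this rough sketch on whichever JVM you care about (for example the server JVM):

import java.lang.management.ManagementFactory;

// Prints the arguments and heap ceiling the current JVM actually started with.
// Useful for confirming that JVM argument changes really took effect.
public class ShowJvmArgs {
    public static void main(String[] args) {
        System.out.println("JVM input arguments: "
                + ManagementFactory.getRuntimeMXBean().getInputArguments());
        System.out.println("Max heap (MB): "
                + Runtime.getRuntime().maxMemory() / (1024 * 1024));
    }
}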
After starting the workshop
Before importing the projects / creating new projects, do the following under Window > Preferences:
- General > Appearance (Uncheck Enable Animations)
- General > Appearance > Label Decorations (Uncheck EVERYTHING !)
- General > Startup and Shutdown (Uncheck auto updaters and feedback/usage reporting plugins)
- General > Welcome (Uncheck all root pages)
- General > Workspace (Uncheck Build Automatically)
- Install / Update > Automatic Updates (Uncheck automatic updates)
- Run / Debug > Console (Check Limit console Output)
- Server (Uncheck Automatically publish to local and remote servers)
- Server (Server Timeout delay should be set to Long for big projects)
- Server > Audio (Uncheck Enable sounds)
- Server > Launching (Uncheck Automatic Publishing and Automatic Restarting)
- Validation (Uncheck Allow projects to override the preferences, Check Suspend all validators)
- Also make sure you have all unnecessary windows, views and perspectives closed
After importing the projects
Anytime the workspace crashes, restarting the Eclipse IDE might take a long time while the workspace gets rebuilt. This is because of the standard Java Tooling plugin that Eclipse runs to rebuild the workspace.
It is far easier to delete the .metadata folder in the Eclipse workspace directory and create new projects. This has been an accepted approach for many a portal developer.
Taking it a step further, you don't actually need to delete the entire .metadata folder. Delete just the following:
- .metadata/.lock file
- .metadata/.plugins/org.eclipse.core.resources/*
This deletes just the project information and keeps all your Eclipse settings and facets intact, which cuts your workspace build time considerably. (My workspace rebuild time decreased by more than 50%.)
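If you end up doing this cleanup often, a throwaway utility along these lines can do it for you. This is just a rough sketch: the default workspace path below is a made-up example, and you should close the IDE (and ideally back up the workspace) before running it.

import java.io.File;

// Deletes only the project bookkeeping from an Eclipse workspace:
// the .metadata/.lock file and everything under
// .metadata/.plugins/org.eclipse.core.resources.
// All other settings under .metadata are left alone.
public class CleanWorkspaceMetadata {

    public static void main(String[] args) {
        // Hypothetical default path - point this at your own workspace.
        File workspace = new File(args.length > 0 ? args[0] : "C:/workspaces/portal");
        delete(new File(workspace, ".metadata/.lock"));
        File resources = new File(workspace, ".metadata/.plugins/org.eclipse.core.resources");
        File[] children = resources.listFiles();
        if (children != null) {
            for (File child : children) {
                delete(child);
            }
        }
        System.out.println("Cleaned project metadata under " + workspace);
    }

    // Recursively deletes a file or directory if it exists.
    private static void delete(File f) {
        if (f.isDirectory()) {
            for (File child : f.listFiles()) {
                delete(child);
            }
        }
        if (f.exists() && !f.delete()) {
            System.err.println("Could not delete " + f);
        }
    }
}

The next time the Workshop starts it rebuilds the project information from scratch, but keeps the rest of the IDE configuration.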
Monday, March 28, 2011
ESX Server tuning – quick tour
Copied from Java Tuning
Posted by Gili Nachum
Our VMware ESX server does a great job for us.
Running on IBM x3650 hardware with 24 GB of RAM and 2×4 cores, it can simultaneously run up to 25 virtual machines, each configured with around 1.5 GB of RAM.
After reaching the 25 running VMs mark, we started noticing increasing sluggishness when additional VMs were turned on.
Of course, we did the trivial stuff: making sure that all screen savers are disabled, that antivirus scans are not scheduled to run at the same point in time, and that all of the VMs are running the latest VMware Tools agent.
It was time to dig deeper and find the bottleneck we had run into.
Someone told me that the reliability of the performance indicators shown by the graphical VI console is questionable and that it's recommended to use the terminal utilities. So, I SSHed into the service console VM and ran the top utility. Immediately, I understood that what I was actually doing was surveying the service console VM's own processes rather than the overall ESX hypervisor activity. A quick dig made me realize that the hypervisor is visible through the esxtop command, which is also executed from within the service console VM.
Even for those of you who know your way around the output of top and Linux's sysstat package, the data shown by esxtop is rather cryptic.
This great esxtop tutorial did me a great service in understanding the esxtop output.
I started more than 30 machines to reproduce the problem, and quickly went through the list of usual suspects: CPU, memory and I/O:
- CPU: I verified that it's not a CPU problem, since the "CPU load average" was around 0.2 and PCPU was much the same.
- Memory: I then switched to the memory display and verified that it's not a physical memory issue. I saw the "high state" marker, which was a good sign, plus there were almost 17 GB ursvd (unreserved memory) in the VMKMEM/MB line. SWAP (~3 GB) seemed OK. VMware's ballooning and memory sharing do miracles in broad daylight (see the quick arithmetic after this list).
- I/O: I didn't see any queues forming; read/write rates seemed pretty low.
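A quick back-of-the-envelope calculation puts those memory numbers in perspective: 25 VMs at roughly 1.5 GB each comes to about 37.5 GB of configured guest RAM on a host with 24 GB of physical RAM, so memory is overcommitted by more than 50% to begin with, and it is exactly the ballooning and transparent page sharing mentioned above that make that possible.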
So, the 25-VM performance limit will remain a mystery until I have proper time to analyze it more thoroughly, or, even better, until I find someone from IT to do that for me.
When sendsignal and Ctrl+break fail to generate Java thread dump in Windows
Copied from JAVA TECH SHARING
Thanks to GUY MOSHKOWICH
I handled a customer incident where it looked like the application was hung. As our application runs as an NT service, I suggested using sendsignal to generate a thread dump file. This worked fine in my local environment, but in the customer's environment the file was not generated. I then sent a batch file that launches the application as a console application instead of an NT service and suggested pressing Ctrl+Break in the application window to generate the thread dump file.
This also failed.
The customer was using a Remote Desktop application. I suspected that the failure was related to the fact that the Ctrl+Break was pressed remotely, so I suggested doing it on the server's keyboard itself, but this also failed.
Next, I thought to generate the thread dump from inside the application. I googled this but did not find a way. I consulted a colleague of mine and he told me about com.ibm.jvm.Dump.JavaDump() (see the "IBM JVM 6.0 diagnostics guide").
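For reference, here is a rough sketch of what such an in-application trigger could look like. The reflective call assumes an IBM JVM, where the com.ibm.jvm.Dump class is available; on any other JVM the catch block simply prints every thread's stack trace instead.

import java.util.Map;

// Rough sketch: trigger a thread dump from inside the application itself.
public class ThreadDumpTrigger {

    public static void dump() {
        try {
            // On an IBM JVM this writes a javacore file, much like Ctrl+Break would.
            // The call is made reflectively so the class still loads on other JVMs.
            Class.forName("com.ibm.jvm.Dump").getMethod("JavaDump").invoke(null);
        } catch (Exception notAnIbmJvm) {
            // Portable fallback: print every thread's stack trace to stderr.
            for (Map.Entry<Thread, StackTraceElement[]> e
                    : Thread.getAllStackTraces().entrySet()) {
                System.err.println("\"" + e.getKey().getName() + "\" " + e.getKey().getState());
                for (StackTraceElement frame : e.getValue()) {
                    System.err.println("\tat " + frame);
                }
                System.err.println();
            }
        }
    }

    public static void main(String[] args) {
        dump();
    }
}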
Before implementing this in our application, I went to consult my team lead on the idea - she thought there must be a more standard way to generate a thread dump. She did not want to add new functionality (dump generation) that was not necessary for our application's activity. She asked me to try to find out whether such a standard way exists.
She was right - there is such a way, and it worked for the customer.
I found it in the diagnostics guide mentioned above, and it involves three steps:
- generating a system dump
- extracting the JVM info from the system dump
- viewing the threads' stacks and monitors
- install "userdump" from Microsoft site:http://www.microsoft.com/downloads/en/confirmation.aspx?familyId=e089ca41-6a87-40c8-bf69-28ac08570b7e&displayLang=en
- run "userdump -p" . it will list all running processes and their process id in the right column
- find the application process id
- run "userdump pid
process_name" . this should generate " .dmp" file in "C:\kktools\userdump8.1\x86" folder (in case you are using x86 platform)
How to extract the JVM info from the system dump
- cd to C:\kktools\userdump8.1\x86
- run "jextract process_name .dmp". this should generate "process_name.
dmp.zip" and " process_name.dmp.xml"
How to view the threads' stacks and monitors
- copy the "process_name.dmp.zip" to a local folder
- open CMD.exe
- cd to the local folder
- run "jdmpview -zip process_name
.dmp.zip" - execute "set logging file file_name
" - execute "set logging on"
- execute "info thread *"
- open the log file in an editor and investigate
By the way, jdmpview has rich functionality and you can read about it in the diagnostics guide.