Friday, November 6, 2009

Threads suck a lot less

Well, I suck less at threading, but who would ever admit to sucking at anything? Anyway, after carefully reading, rereading, trying, testing and reading again both the Well House Consultants threading course and this excellent PDF on Python threading, I managed to get it working properly. I did break the extended timeout functionality, but thought that worth it to have threading. So making the extended timeout available again is planned for version 0.06!

http://code.google.com/p/epingpy/

Monday, October 26, 2009

Threads suck!

Well, not really, of course. But the thing is, my ePing.py release with threading support was probably not really threaded... It didn't work, so I just put up a revised version that does work, but I fear I broke the threaded part. Well, work comes first, but I will try to look into it. Maybe I should read up on the basics first...

Thursday, October 22, 2009

Net result from ePing.py

The company that creates and maintains our web applications has apparently managed to "optimize" their code.

Last week my director asked me to mail them an excerpt from the log, showing how many timeouts our sites have. Of course they gave me no feedback, but the number of e-mail notifications has dropped drastically. And since yesterday, when I put the latest version to work with a higher default timeout of 10 seconds (instead of 5 seconds), I haven't received any notifications at all. Actually, as of now, nothing has been written to the log. So I guess their code optimizations, together with the higher timeout value, made this possible.

I actually started on the basic idea back in March this year, but began a completely new codebase around July, under a different name. And it seems it has already paid off for my employer... Since it's an e-tailer, any downtime means possibly missed orders...

Wednesday, October 21, 2009

Progress on ePing.py going slowly but surely

I just put the source of v0.04 online on Google Code. The main new thing in this version is that the UrlChecker class now runs threaded. This means that one run of ePing.py does not take longer when more URLs have to be checked (previously one check took more than 8 minutes when several of my employer's websites were unreachable). A rough sketch of the idea follows below.
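To illustrate what running the checks threaded buys, here is a minimal sketch. It is not the actual ePing.py source; check_url is a made-up stand-in for the work the UrlChecker class does per URL, and the two example URLs are placeholders.

# Minimal sketch of the threaded approach, not the real ePing.py code;
# check_url is a hypothetical stand-in for one URL check.
import threading

def check_url(url):
    pass  # the real check would do the HTTP request and logging here

urls = ["http://example.com/", "http://example.org/"]
threads = [threading.Thread(target=check_url, args=(url,)) for url in urls]
for t in threads:
    t.start()   # every check starts at (roughly) the same time
for t in threads:
    t.join()    # wait until all checks are done before the run ends

Because the slow part is waiting on the network, the whole run now takes about as long as the slowest single URL instead of the sum of all of them.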

Thursday, July 23, 2009

First release of ePing.py!

Today, not even 5 minutes ago, I released the first public version of my first Python project. I called it ePing.py, it being a kind of 'enhanced' ping tool written in Python.

The project can be found on code.google.com/p/epingpy.

It basically checks whether a specified URL returns an HTTP error or times out within a specified period.
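For the curious, the core of such a check looks roughly like the sketch below. This is a simplified illustration in current Python syntax, not the released source (ePing.py itself dates from the Python 2 era), and the URL and the 5-second timeout are just example values.

# Rough illustration of "HTTP error or timeout within a period", not the real code.
import socket
import urllib.error
import urllib.request

def check(url, timeout=5):
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return "OK"
    except urllib.error.HTTPError as e:              # the server answered, but with an error status
        return "HTTP error %d" % e.code
    except (urllib.error.URLError, socket.timeout):  # no usable answer within the timeout
        return "timeout / unreachable"

print(check("http://example.com/"))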

Please download it, try it and give me feedback. This is also my first public attempt at OOP (without a lot of oopz, I hope... :-) )

Friday, May 29, 2009

Userscript to change links

Why I set out to create this very short script doesn't really matter. Sometimes it is just handy to change a link on a website when you don't have control over the source code. Looked at more broadly, this method changes a specified attribute on every specified HTML element, which makes it more generally useful.

Well okay, this is the situation. Our CMS gives us the option to change the configuration options for systems we sell. However, for certain systems our Sales department asked us to only use options we have in stock, so delivery times can stay as short as possible. Of course this information is not displayed in the page itself; it is, however, returned from the database. We are not allowed to make changes to the source of our CMS, but what my colleague did is copy the specific JSP file and change the output so that it shows whether a product is a stock product or not. We have to open this new file manually for every component change. That is bothersome in my eyes, and that's why I set out to create a small userscript for IE7Pro.

On a side note, we use a lot of different userscripts for both Internet Explorer (through IE7Pro) and Firefox (through Greasemonkey) to enhance efficiency or fix annoying inefficiencies (a small but relevant difference).

So I basically needed to change every link on pageA.jsp that pointed to pageB.jsp?value=someID so that it points to pageB2.jsp?value=someID instead.




// ==UserScript==
// @name Show external status
// @namespace qrazi.blogspot.com
// @description Replace an attribute of specified elements
// @include http://pages.where.this.should.apply*
// @exclude http://pages.where.this.absolutely.shouldnt.apply(optional)
// @version 1.0
// ==/UserScript==

var elm = document.getElementsByTagName("a"); // or any other kind of element
for (var i = 0; i < elm.length; i++) {
    var attr = elm[i].getAttribute("href"); // or any other kind of attribute
    if (attr && attr.indexOf("pageB.jsp") !== -1) { // check whether this link should be changed
        attr = attr.replace("pageB.jsp", "pageB2.jsp"); // replace the old target with the new one
        elm[i].setAttribute("href", attr); // write the changed value back to the href attribute
    }
}

Thursday, April 23, 2009

Unbelieveable

At work we use third-party content for the products we sell. To match this content to the products, this third party, Icecat, uses XML index files. Our current setup is that a VBScript inserts the complete index file (~150 MB) into our MS SQL database. Now, every time we put a new product in the CMS and put it online in our webshop (it's my job as a content manager, I don't own the company), the CMS queries for the location of the XML file with the specs, so it can use this XML to output the specs with the product.

So you see, it's quite vital to have this import working. However, since last Wednesday the import stopped working. According to the IT guys, there was no error, it just stopped. The company that made this specific VBScript and the CMS we use said they would look into it. Today, a week later, they finally got time to do so. What they found is that the index file got too big, and then they asked whether there was a daily or weekly index file (= smaller and therefore should work).

OMFG! First of all, there is a daily file, so they should have implemented a daily incremental import in the first place. Second, VBScript's default XML parser is DOM based. This means the whole XML document is loaded into memory as a tree structure, so the memory required grows with the size of the file. No wonder that, before the import quit working altogether, it already took up to 10 hours to finish.

Look, I don't have a degree in programming; I didn't follow any education remotely close to it. I am, by all means, a n00b when it comes to scripting and programming. But if I know all this, why the h*ll can't several people who finished college in this field, who have their own company and charge a lot of money even to answer an e-mail from us, figure this out?

An argument given was that it would be too much work to rebuild the script (75 lines long) to use a SAX-like solution (which AFAIK isn't available in VBScript). But the server in question also supports ASP.NET 1.1, so XmlReader is available. Although not the same as SAX (pull instead of push), it is definitely comparable in performance, or faster. In fact, I use this solution to read the same index file and match it against our outstanding requests with Icecat in under 7 minutes.
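To make the difference concrete, here is the streaming idea sketched in Python (my actual matching tool uses .NET's XmlReader, so this is only an analogy; the file name and element name below are made-up placeholders, not Icecat's real index format). The point is that the file is read element by element instead of being loaded as one huge tree.

# Streaming sketch in Python; "file" and "indexfile.xml" are placeholder names.
import xml.etree.ElementTree as ET

def stream_index(path):
    # iterparse walks the file incrementally instead of building the whole DOM tree
    for event, elem in ET.iterparse(path, events=("end",)):
        if elem.tag == "file":       # placeholder element name
            yield dict(elem.attrib)  # hand the attributes to the caller
        elem.clear()                 # drop the element so memory use stays flat

for entry in stream_index("indexfile.xml"):
    pass  # e.g. match the entry against our outstanding Icecat requests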

* qrazi is about to get so frustrated about being underpaid, having a job a monkey could do, and meeting people who pull this crap but make three times what he makes, that he's seriously thinking about just plain quitting this job....

Monday, April 20, 2009

New videocard

A few months ago, I bought a GeForce 8600GT 256MB GDDR3 from my colleague. This type of card never really got recommended, because of the small performance increase over the previous generation. Still, I went from an integrated X1250 video card to this GeForce 8600GT, and I'm now a happy gamer again... :)