Thursday, October 27, 2011

Amazon Web Services

AWS SDK for Java:
http://aws.amazon.com/sdkforjava/

books:
O'Reilly: Programming Amazon EC2 by Jurg van Vliet and Flavia Paganelli

O'Reilly: Programming Amazon Web Services by James Murty

Monday, October 24, 2011

Effective Java

I was surprised to discover that there is a second edition of Effective Java:
http://java.sun.com/docs/books/effective/

Tuesday, August 9, 2011

importing a mysql db dump from v4 into v5.5

the CREATE TABLE DDL has changed: the ENGINE option is used instead of TYPE
one option is to change TYPE=MyISAM to ENGINE=MyISAM in the dump file before importing it
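
a trivial rewrite of the dump file, sketched in Java (input and output file names come from the command line; sed or a text editor works just as well):

import java.nio.file.Files;
import java.nio.file.Paths;

public class FixDumpEngineOption {
    public static void main(String[] args) throws Exception {
        //read the v4 dump, replace the removed TYPE= table option with
        //ENGINE=, and write the patched dump for import into v5.5
        String dump = new String(Files.readAllBytes(Paths.get(args[0])), "UTF-8");
        dump = dump.replace("TYPE=MyISAM", "ENGINE=MyISAM");
        Files.write(Paths.get(args[1]), dump.getBytes("UTF-8"));
    }
}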

wireshark - network protocol analyzer

http://www.wireshark.org/

Monday, August 8, 2011

50 things to know before migrating Oracle to MySQL

http://www.xaprb.com/blog/2009/03/13/50-things-to-know-before-migrating-oracle-to-mysql/

notes on mysql while reading ... (3)

High Performance MySQL

Indexes

...
We're not just being picky: these two kinds of index access perform differently. A range condition (i.e. BETWEEN, >, <) makes MySQL ignore any further columns in the index, but a multiple equality condition (i.e. IN) doesn't have that limitation. For example, with an index on (last_name, dob), WHERE last_name IN ('Smith', 'Jones') AND dob = '1976-01-01' can use both index columns, while WHERE last_name > 'Smith' AND dob = '1976-01-01' can use only last_name.


repairing

CHECK TABLE
REPAIR TABLE
myisamchk

Updating Index Statistics
API calls: records_in_range() and info()
MySQL’s optimizer is cost-based
ANALYZE TABLE
Each storage engine implements index statistics differently
MyISAM stores statistics on disk, and ANALYZE TABLE performs a full index scan to compute cardinality. The entire table is locked during this process.
InnoDB does not store statistics on disk, but rather estimates them with random index dives the first time a table is opened. -> less accurate statistics, no blocking

You can examine the cardinality of your indexes with SHOW INDEX FROM (the Cardinality column), or by querying INFORMATION_SCHEMA.STATISTICS
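
the same information can also be read programmatically from INFORMATION_SCHEMA.STATISTICS - a small JDBC sketch (connection settings, schema, and table name are placeholders; needs the MySQL JDBC driver on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ShowCardinality {
    public static void main(String[] args) throws Exception {
        //placeholder connection settings - adjust to your server
        Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "user", "pwd");
        PreparedStatement ps = con.prepareStatement(
                "SELECT INDEX_NAME, COLUMN_NAME, CARDINALITY " +
                "FROM INFORMATION_SCHEMA.STATISTICS " +
                "WHERE TABLE_SCHEMA = ? AND TABLE_NAME = ?");
        ps.setString(1, "test");
        ps.setString(2, "mytable");
        ResultSet rs = ps.executeQuery();
        while (rs.next()) {
            System.out.println(rs.getString(1) + "." + rs.getString(2)
                    + " cardinality=" + rs.getLong(3));
        }
        con.close();
    }
}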

Fragmentation - row fragmentation & intra-row fragmentation
MyISAM tables may suffer from both types of fragmentation, but InnoDB never fragments short rows. To defragment data, you can either run OPTIMIZE TABLE or dump and reload the data.

hm, they recommend using a summary table instead of counting the rows of the real table (see also the InnoDB COUNT(*) note below)

MySQL's ALTER TABLE performance can become a problem with very large tables. ALTER TABLE lets you modify columns with ALTER COLUMN, MODIFY COLUMN, and CHANGE COLUMN. All three do different things; MODIFY COLUMN always causes a table rebuild.

Building MyISAM Indexes Quickly:
ALTER TABLE test.load_data DISABLE KEYS;
-- load the data
ALTER TABLE test.load_data ENABLE KEYS;
! Unfortunately, this doesn't work for unique indexes, because DISABLE KEYS applies only to nonunique indexes

The InnoDB Storage Engine
Clustering by primary key: all InnoDB tables are clustered by the primary key, which you can use to your advantage in schema design.
No cached COUNT(*) value: unlike MyISAM or Memory tables, InnoDB tables don't store the number of rows in the table, which means COUNT(*) queries without a WHERE clause can't be optimized away and require a full table or index scan.




telnet for windows vista / windows 7

http://www.leateds.com/2009/telnet-for-windows-vista-windows-7/

to enable the telnet client on Windows Vista / Windows 7:
Control Panel -> Programs -> Turn Windows features on or off -> Telnet Client

Friday, August 5, 2011

notes on mysql while reading ... (2)

High Performance MySQL

Finding Bottlenecks: Benchmarking and Profiling
A benchmark measures your system’s performance. In contrast, profiling helps you find where your application spends the most time or consumes the most resources.

*) http_load - windows port: http://www.orenosv.com/misc/
*) MySQL's BENCHMARK() - SET @input := 'hello world'; SELECT BENCHMARK(1000000, MD5(@input)); (a user variable is used to avoid cache hits)
*) SysBench - a multithreaded benchmarking tool that can test CPU, I/O, mutex contention, and OLTP database workloads

Thursday, August 4, 2011

notes on mysql while reading ...

High Performance MySQL

(hm, at this rate there is a risk of copying and pasting the whole book)

Storage engine - the 3rd level of the MySQL architecture - is responsible for, among other things:
- storage engines can implement their own locking policies and lock granularities (although ALTER TABLE issues a table lock regardless of the storage engine)
- row locks are implemented by the storage engines, e.g. by InnoDB, Falcon

(
REPEATABLE READ is MySQL’s default transaction isolation level
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
InnoDB supports all four ANSI standard isolation levels

)

- InnoDB and Falcon solve phantom reads with multi-version concurrency control (MVCC)
- the underlying storage engines implement transactions: MySQL AB provides three transactional storage engines: InnoDB, NDB Cluster, and Falcon. Several third-party engines are also available; the best-known engines right now are solidDB and PBXT.

(
MySQL operates in AUTOCOMMIT mode by default -> SHOW VARIABLES LIKE 'AUTOCOMMIT'; SET AUTOCOMMIT = 1;
changing the value of AUTOCOMMIT has no effect on nontransactional tables, such as MyISAM or Memory tables
a commit is forced by DDL statements and by LOCK TABLES
MySQL will not usually warn you or raise errors if you do transactional operations on a nontransactional table
InnoDB uses a two-phase locking protocol: SELECT ... LOCK IN SHARE MODE; SELECT ... FOR UPDATE; the MySQL server itself (not the storage engine!) also supports the LOCK TABLES and UNLOCK TABLES commands
actually, InnoDB and others use a row-locking mechanism with MVCC. InnoDB implements MVCC by storing with each row two additional, hidden values that record when the row was created and when it was expired (or deleted). The row stores the system version number at the time each event occurred; this number increments each time a transaction begins. Each transaction keeps its own record of the current system version, as of the time it began.
)
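
a toy illustration in Java of the visibility rule described above - my own sketch of the idea, not InnoDB code:

public class MvccSketch {

    static final long NOT_DELETED = Long.MAX_VALUE;

    //the two hidden per-row values the book describes
    static class RowVersion {
        final long createdVersion; //system version when the row was created
        final long deletedVersion; //system version when it was expired/deleted

        RowVersion(long createdVersion, long deletedVersion) {
            this.createdVersion = createdVersion;
            this.deletedVersion = deletedVersion;
        }
    }

    //a transaction sees only row versions created at or before its own
    //snapshot version and not yet expired at that version
    static boolean visibleTo(RowVersion row, long txnVersion) {
        return row.createdVersion <= txnVersion && row.deletedVersion > txnVersion;
    }

    public static void main(String[] args) {
        long myTxn = 10; //our transaction's snapshot version

        System.out.println(visibleTo(new RowVersion(5, NOT_DELETED), myTxn));   //true - inserted earlier
        System.out.println(visibleTo(new RowVersion(3, 7), myTxn));             //false - already deleted
        System.out.println(visibleTo(new RowVersion(12, NOT_DELETED), myTxn));  //false - inserted later
    }
}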

MySQL storage engines:
SHOW TABLE STATUS LIKE 'user' \G
MyISAM tables created in MySQL 5.0 with variable-length rows are configured by default to handle 256 TB of data, using 6-byte pointers to the data records:
CREATE TABLE mytable (
    a INTEGER NOT NULL PRIMARY KEY,
    b CHAR(18) NOT NULL
) MAX_ROWS = 1000000000 AVG_ROW_LENGTH = 32;
ALTER TABLE mytable ENGINE = Falcon;

mysql

yeah, time to learn it

resources:

book:
http://www.highperfmysql.com/

web sites:
http://www.mysqlperformanceblog.com/

Wednesday, July 13, 2011

Google Guice

Dependency Injection
(Google Plus technological details:
http://www.infoq.com/news/2011/07/Google-Plus)

Tuesday, July 12, 2011

Jubula

functional testing
nice advertising, but I'm still not able to use it - terrible documentation

Friday, June 24, 2011

NLS_LANG env variable

faced the following 'problem' while migrating data from a PostgreSQL db to Oracle on a French installation: the data is exported in UTF8, while Oracle expects win1252. To import the data correctly, set the NLS_LANG env variable (set NLS_LANG=FRENCH_FRANCE.UTF8).

Tuesday, June 21, 2011

Groovy & Grails

YAJWF

books:
Apress: A Definitive Guide to Grails, 2nd edition
Apress: Groovy and Grails Recipes
Manning: Grails in Action

some notes while reading:

. servletContext - objects placed within it will not be garbage-collected unless the application explicitly removes them; access to the servletContext object is not synchronized (see the sketch after these notes)
. return statements are optional in Groovy
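
a Java-level sketch of both servletContext points (Grails' servletContext is the standard javax.servlet.ServletContext underneath; the attribute name is made up for illustration):

import java.util.HashMap;
import javax.servlet.ServletContext;

public class ServletContextNotes {

    //access to the servletContext is not synchronized - guard
    //check-then-act sequences yourself if several requests may race here
    public void cacheExpensiveObject(ServletContext ctx) {
        synchronized (ctx) {
            if (ctx.getAttribute("report.cache") == null) { //made-up attribute name
                ctx.setAttribute("report.cache", new HashMap<String, Object>());
            }
        }
    }

    //attributes live until the app is undeployed unless removed explicitly
    public void cleanup(ServletContext ctx) {
        ctx.removeAttribute("report.cache");
    }
}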

Saturday, March 19, 2011

WaveMaker

VMware has acquired WaveMaker -> I've started looking at WaveMaker after reading this headline .. so far it looks interesting

Tuesday, February 15, 2011

spring related links that might be helpful

Spring – How to do dependency injection in your session listener

spring+wicket + quartz continues 2...

adding job execution triggers

//job listener implementation
public class Epb5JobExecutionListener extends org.quartz.listeners.JobListenerSupport {

    public String getName() {
        return "Epb5JobExecutionListener";
    }

    public void jobWasExecuted(
            JobExecutionContext context,
            JobExecutionException jobException
    ) {
        //real implementation comes here
    }
}

for example, if the job was executed successfully, jobException is null, and context.getJobDetail().getName() and context.getJobDetail().getJobClass() tell you which job was actually executed.
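
a minimal sketch of such a jobWasExecuted body inside Epb5JobExecutionListener (the log messages are illustrative - the real implementation is not shown here; getLog() comes from JobListenerSupport):

public void jobWasExecuted(
        JobExecutionContext context,
        JobExecutionException jobException
) {
    String jobName = context.getJobDetail().getName();
    if (jobException == null) {
        //no exception passed in -> the job completed normally
        getLog().info("job '" + jobName + "' executed successfully");
    } else {
        //the job threw a JobExecutionException
        getLog().error("job '" + jobName + "' failed", jobException);
    }
}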

//in spring applicationContext.xml

<!-- scheduler -->
<bean id="scheduler" class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
    <property name="triggers">
        <list>
            <ref bean="export2StatisticsTriggerBean" />
            <ref bean="ldapUsersUpdateTriggerBean" />
        </list>
    </property>
    <property name="globalJobListeners">
        <list>
            <bean class="at.kds.epb5.bl.Epb5JobExecutionListener"/>
        </list>
    </property>
</bean>

to sum up, the overall porting of the exports implementation took me 3-5 times longer than the initial implementation, mostly because of the lack of detailed docs

Friday, February 11, 2011

spring+wicket + quartz continues...

the other scheduled job I need updates users from LDAP into the database

here are all bean definitions in applicationContext.xml

<!-- quartz scheduler related -->

<!-- jobs definitions -->

<!-- export to Austrian statistics -->
<bean id="export2StatisticsJob" class="org.springframework.scheduling.quartz.MethodInvokingJobDetailFactoryBean">
    <property name="targetObject"><ref bean="scheduledexportservice"/></property>
    <property name="targetMethod"><value>executeScheduledExportsIfAny</value></property>
    <property name="concurrent" value="false"/>
</bean>

<bean id="ldapProperties" class="at.kds.epb5.common.LdapProperties">
    <property name="providerUrl"><value>ldap://here comes ip address/</value></property>
    <property name="securityAuthentication"><value>simple</value></property>
    <property name="securityPrincipal"><value>here comes user</value></property>
    <property name="securityCredentials"><value>here comes pwd</value></property>
    <property name="root"><value>dc=oeamtc,dc=at</value></property>
    <property name="filter"><value>(&amp;(objectclass=person) (| (sAMAccountName=a*) (sAMAccountName=t*)))</value></property>
    <property name="userId"><value>sAMAccountName</value></property>
    <property name="userFirstName"><value>givenName</value></property>
    <property name="userLastName"><value>sn</value></property>
    <property name="userValidTo"><value>accountExpires</value></property>
    <property name="userPrefix"><value>a</value></property>
    <property name="userClubCardNumber"><value>extensionAttribute13</value></property>
</bean>

<bean name="ldapUsersUpdateJob" class="org.springframework.scheduling.quartz.JobDetailBean">
    <property name="jobClass" value="at.kds.epb5.bl.EpbUsersUpdateJob" />
    <property name="jobDataAsMap">
        <map>
            <entry key="ldapProperties" value-ref="ldapProperties"/>
            <entry key="userservice" value-ref="userservice"/>
        </map>
    </property>
</bean>

<!-- cron trigger definitions -->
<bean id="export2StatisticsTriggerBean" class="org.springframework.scheduling.quartz.CronTriggerBean">
    <property name="jobDetail" ref="export2StatisticsJob" />
    <!-- run at 01 AM every day -->
    <property name="cronExpression" value="0 0 01 * * ?" />
</bean>

<bean id="ldapUsersUpdateTriggerBean" class="org.springframework.scheduling.quartz.CronTriggerBean">
    <property name="jobDetail" ref="ldapUsersUpdateJob" />
    <!-- run at 02 AM every day -->
    <property name="cronExpression" value="0 0 02 * * ?" />
</bean>

<!-- scheduler -->
<bean id="scheduler" class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
    <property name="triggers">
        <list>
            <ref bean="export2StatisticsTriggerBean" />
            <ref bean="ldapUsersUpdateTriggerBean" />
        </list>
    </property>
</bean>

where

public class EpbUsersUpdateJob extends QuartzJobBean {

    //note: @SpringBean is Wicket's injection annotation and has no effect in a
    //Quartz job - the fields are actually populated from jobDataAsMap, which
    //QuartzJobBean applies as bean properties, so the setters are required
    @SpringBean
    private UserService userservice;

    private LdapProperties ldapProperties;

    public void setUserservice(UserService userservice) {
        this.userservice = userservice;
    }

    public void setLdapProperties(LdapProperties ldapProperties) {
        this.ldapProperties = ldapProperties;
    }

    @Override
    protected void executeInternal(JobExecutionContext context)
            throws JobExecutionException {
        //implementation not shown
    }
}

Thursday, February 10, 2011

spring+wicket + quartz

continuing with the porting of an old web app (written with struts1) to wicket+spring, where quartz was used for scheduling of exports & users updates from LDAP to the database

my initial idea was to implement the org.quartz.Job interface using a 'service' that does the actual export; this all ended with the service reference remaining uninitialized - every attempt to 'inject the dependency' failed (Quartz instantiates Job classes itself, so Spring never gets a chance to wire in dependencies)

final solution - using spring-context-support
(maven:
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context-support</artifactId>
    <version>${spring.version}</version>
</dependency>
)

then in applicationContext.xml:

<!-- quartz scheduler related -->

<!-- jobs definitions -->
<bean id="export2StatisticsJob" class="org.springframework.scheduling.quartz.MethodInvokingJobDetailFactoryBean">
    <property name="targetObject"><ref bean="scheduledexportservice"/></property>
    <property name="targetMethod"><value>executeScheduledExportsIfAny</value></property>
</bean>

<!-- cron trigger definitions -->
<bean id="export2StatisticsTriggerBean" class="org.springframework.scheduling.quartz.CronTriggerBean">
    <property name="jobDetail" ref="export2StatisticsJob" />
    <!-- run every 5 minutes -->
    <property name="cronExpression" value="0 0/5 * * * ?" />
</bean>

<!-- scheduler -->
<bean id="scheduler" class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
    <property name="triggers">
        <list>
            <ref bean="export2StatisticsTriggerBean" />
        </list>
    </property>
</bean>


the key here is passing the service bean scheduledexportservice as the value of the targetObject property in the export2StatisticsJob bean definition - Spring builds the JobDetail and invokes the named method on the already-wired service bean
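
for reference, MethodInvokingJobDetailFactoryBean only needs a public no-argument method on the target bean; a minimal sketch of what the service could look like (the class name and body are my illustration - only the method name comes from the config above):

//minimal sketch - class name and body are illustrative
public class ScheduledExportService {

    //must be public and take no arguments, since it is named
    //via the targetMethod property
    public void executeScheduledExportsIfAny() {
        //look up pending exports and run them (real implementation not shown)
    }
}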

Monday, February 7, 2011

wicket + form validation

when extending AbstractFormValidator
use FormComponent.getInput() / FormComponent.getConvertedInput() to validate the entered value - at validation time the input is not yet pushed to the component's model object, which is why validating against the data model doesn't work
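
a minimal sketch of such a validator (Wicket 1.4-style API; the two date fields are made-up examples):

import java.util.Date;
import org.apache.wicket.markup.html.form.Form;
import org.apache.wicket.markup.html.form.FormComponent;
import org.apache.wicket.markup.html.form.validation.AbstractFormValidator;

public class DateRangeValidator extends AbstractFormValidator {

    private final FormComponent<Date> from;
    private final FormComponent<Date> to;

    public DateRangeValidator(FormComponent<Date> from, FormComponent<Date> to) {
        this.from = from;
        this.to = to;
    }

    public FormComponent<?>[] getDependentFormComponents() {
        return new FormComponent<?>[] { from, to };
    }

    public void validate(Form<?> form) {
        //use getConvertedInput(), NOT the model object - the model
        //is only updated after validation succeeds
        Date fromValue = from.getConvertedInput();
        Date toValue = to.getConvertedInput();
        if (fromValue != null && toValue != null && fromValue.after(toValue)) {
            error(to); //reports an error keyed by the validator's resource key
        }
    }
}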

notes on Spring+JPA+Hibernate

having a mapping entity class & using a shared EntityManager
(
in "service" implementation
@PersistenceContext
private EntityManager entityManager;
)

1) "insert" query
@Transactional
public void scheduleExport(ScheduledExport export) {
if ( export == null ) return;
entityManager.persist(export);
}

2) annotations for an ID generated with a sequence:
mapping entity class annotations
@Entity
@Table(name = "SCHEDULED_EXPORTS")
@SequenceGenerator(sequenceName="SEQ_SCHEDULED_EXPORTS", name="SEQ_SCHEDULED_EXPORTS")
public class ScheduledExport implements Serializable {
    ...

    @Id
    @Column(name="ID")
    @GeneratedValue(strategy=GenerationType.SEQUENCE, generator="SEQ_SCHEDULED_EXPORTS")
    private Integer id;

3) "delete" query
@Transactional
public void deleteExport(ScheduledExport export) {
if ( export == null ) return;
export = entityManager.merge(export);
entityManager.remove(export);
}
first call merge() and then remove() - otherwise getting java.lang.IllegalArgumentException: Removing a detached instance

4) when access to the data source is needed (again in the service implementation):
@Autowired
DataSource dataSource;
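
the injected DataSource can then be used with plain JDBC or, more conveniently, Spring's JdbcTemplate; a small sketch (the count query is just an example against the SCHEDULED_EXPORTS table above):

import org.springframework.jdbc.core.JdbcTemplate;

//e.g. elsewhere in the same service implementation:
public int countScheduledExports() {
    //JdbcTemplate takes care of acquiring and releasing the connection
    JdbcTemplate jdbc = new JdbcTemplate(dataSource);
    return jdbc.queryForInt("SELECT COUNT(*) FROM SCHEDULED_EXPORTS");
}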

Friday, January 28, 2011

Oracle XML Developers Kit

we use the Oracle XML Developer's Kit extensively for report & export generation - OracleXMLQuery + an XSL-FO transformation with apache fop.

on the oracle site I see an announcement that Oracle XML Developer Kit 11g Release 2 is available, but only v 10.1.0.3.0 (from 8/31/2004!) is offered for download...

Even when no Oracle implementation of the XML parsers is used, xmlparserv2.jar is still referenced from xsu12.jar (XMLParseException is referenced...)

which leads to my next problem - for the current project I am using wicket + spring, and spring's application context file fails to parse with the Oracle XML parser when the web app context is loaded...

discover apache pivot

while looking for a way to do pivoting with hibernate, I found apache pivot in the search results. it is not what I was looking for, but it definitely looks impressive

Thursday, January 13, 2011

wicket cool

while preparing a project skeleton / start up for wicket + spring + hibernate I discovered wicket cool

cons
- the limitation on dots in the package name
- not the latest versions of spring, hibernate, wicket, etc. are used
- the web app tests use EnhancedWicketTester - the imports have to be adjusted:
import pl.rabbitsoftware.EnhancedWicketTester;
- I am not a particular fan of splitting the project into separate projects for domain, service, and webapp, especially when there is only 1 developer responsible for all business logic, presentation, etc.; and this DAO pattern - I definitely hate it

other startup project by jWeekend

wicket + tests

writing tests for wicket blog

Test Driven Development with Wicket and Spring