Sunday, June 24, 2007

Repliqa and web discovery

I’ve spent a good chunk of the weekend getting caught up on Mark Seremet’s Repliqa blog. There’s a lot of cool stuff there, and I can’t wait to get in on the Repliqa alpha (assuming it’s not vapor, and I don’t think it is).

One thing that keeps coming up in his blog (usually between the lines), and something with which I am becoming fascinated, is the serendipity of the internet. I'm not sure that's really the right word, but it captures some of the sense I have of building a network as I traverse (or not) the links that come into range of my mouse while I browse the web. Some of those links are indeed serendipitous, taking me to something I'm interested in without my necessarily intending to go there. Others take me someplace completely unexpected (like the Repliqa blog), sometimes piquing my interest, other times not.

Part of what Repliqa is, is a discovery engine, doing well on a broad scale what the recommendations feature on Amazon (for example) does not so well today (yes, I'm SURE I'm oversimplifying). What's great about this is that it will get to know what I am interested in and help me 'discover' more things that might fit that profile: news, books, music, blogs, etc. What's missing is the stuff that it DOESN'T know about me. I'm a fan of the random and of things that strike me (like Mark's blog) when I'm in the right mood for them. Sometimes what I want is something I can't define, or that I don't even know I want.

Hey Mark, can Repliqa do that for me?

One other thing I'd love to have, and maybe it's out there and I don't know it, is a way to backtrack those links I discussed above. I can't remember now how I found the Repliqa blog, or many other cool things that I've run into, but it would be great if del.icio.us or other such sites could pull the hyperlink trail that took me to the site I just added. Sometimes I need more than the destination to remind me where I've been.

Monday, June 11, 2007

WMI Collection or Instance?

I was working with WMI in PowerShell today trying to get some disk data from a cluster, and I ran into an odd behavior that had me stumped for a while. Here’s a simplified version of the problem (this is not what I was doing with the cluster, but the WMI behavior is identical). I got a WMI object with the intent of using an associator class to get related information. In this example, I get a Win32_LogicalDisk instance and try to relate it to Win32_DiskPartition. I used the following commands:

PS C:\> $z=gwmi win32_logicaldisk -filter "DeviceID='C:'"
PS C:\> $y=$z.psbase.GetRelated("Win32_DiskPartition")
PS C:\> $y.name
PS C:\>
Notice that $y.name was NULL. But if I looked at the whole value, there was one instance:
PS C:\> $y

NumberOfBlocks   : 75504448
BootPartition    : True
Name             : Disk #0, Partition #0
PrimaryPartition : True
Size             : 38658277376
Index            : 0
What the heck? Why couldn’t I get the value of a property? Well, it turns out that even though GetRelated() returned only one instance, what came back was actually a collection:
PS C:\> $y.gettype().FullName
System.Management.ManagementObjectCollection
What made it work was forcing $y to be a scalar. By wrapping the statement in $(), I was able to get this as a single instance as opposed to a one-instance collection:
PS C:\> $($y).name
Disk #0, Partition #0
Alternatively, this can be accomplished at assignment with the following:
PS C:\> $y=$($z.psbase.GetRelated("Win32_DiskPartition"))
PS C:\> $y.name
Disk #0, Partition #0
The above applies to the GetRelated() method in general. Regardless of the number of instances returned (even zero), the returned object is always a System.Management.ManagementObjectCollection and must always be treated as a collection.
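
Given that, the safest pattern is probably to treat the result as a collection from the start, whether you expect one instance or fifty. Here's a minimal sketch using the same Win32_LogicalDisk example as above:

PS C:\> $parts = $z.psbase.GetRelated("Win32_DiskPartition")
PS C:\> $parts.Count
1
PS C:\> foreach ($part in $parts) { $part.Name }
Disk #0, Partition #0

The foreach loop behaves sensibly whether the collection holds zero, one, or many instances, so you never have to special-case the single-instance result with $().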

Tuesday, May 29, 2007

Hot Dog Was NOT on the Menu

Notice: The event described below was unintentional, and any humor I or my family found in the situation was because there was no injury or harm to anyone, animal or human.

I normally don't get much excitement, particularly of the humorous kind, but yesterday was an exception. I was grilling some steaks and burgers for the family. One of my dogs (Csaba) was hanging around the grill trying to get the drippings that made it through the coals and out the vent holes in the bottom of the kettle. I try to keep him away from the grill when it's in use, but he is pretty persistent, and I occasionally have to step away. My biggest concern has been that he might tip the grill over, as he's a pretty stocky dog for a Shiba Inu, and he's a bit of a klutz.

Anyway, I stepped in the house for a moment to grab something forgotten on the counter, leaving Csaba to his own devices for about 30 seconds. When I went back outside, he was on fire! Yes, on FIRE! From the base of his skull to the middle of his back, and down onto his left shoulder. As I didn't have water immediately at hand, and honestly wasn't thinking exceptionally clearly, I tried to catch him to smother the fire. The chase turned out to do the job anyway: he ran, and the breeze from his own motion put the flames out.

A quick inspection showed that he was little the worse for the experience, thank goodness. And I'm also thankful he didn't come into the house like that. Wow, that could have been bad.

Shiba Inus have nice thick coats, and I suspect he never actually knew that there was a problem. He'd been under the grill enough in the past to know that his back would get warm and that it wasn't a problem, so I suspect he thought this was just the same thing. You can tell something happened to his fur, as there are some funny-looking patches over his left shoulder, but it would be tough to tell exactly what the problem was.

At the time, I was pretty freaked out, but I can see the humor now given that he was uninjured. And I have a new project to complete before my next session on the grill - a cage to keep the dog out from under it. Otherwise, next time we really might have a literal hot dog.

Friday, May 25, 2007

WTS Roaming Profile Slow to Load

This week, I had what has to be one of the most patient customers I’ve ever worked with. In working another issue, he reported to me that his Windows Terminal Server (WTS) logon took almost 30 minutes (!) to complete, and that he had been living with this for more than a year (!!). He had apparently been told by other support personnel that they couldn’t figure out the cause, and that he would have to live with it. Holy cow!

Well, I was amazed (to say the least), and I was eager to track down the cause. First, though, some detail on the WTS system. The server was running Windows Server 2003 Enterprise, SP1 and Citrix MetaFrame XP v1.0, although I strongly suspect that this would have been an issue without MetaFrame, and probably in other versions of Windows.

My first suspicion was that his profile was too large, or that he had some large files that changed a lot. Neither turned out to be the case: his profile was only 32MB, with just a handful of files over 1MB. That clearly wasn't the problem.

I decided to try copying his whole roaming profile to a server just to see how long it took. After all, this could be a network performance issue, right? WOW! It took more than 20 minutes to copy the whole thing. Why? Well, it turned out that he had more than 20,000 files in his %USERPROFILE%\Cookies folder, and that took a while to copy. But still, that was less time than he was reporting.

I then remembered another customer with a profile load issue that seemed kinda weird at the time. Here's the relevant Event ID 1509 description:

Windows cannot copy file \\server\share\wtsprofiles\username\Local Settings\Temp\5\Temporary Internet Files\Content.IE5\0ZDCVVFL\dickey;dir=newsweek[1].intl;dir=dickey;dcopt=ist;heavy=n;poe=yes;kw=dickey;pos=ad8;ad=125x125;fromrss=n;rss=n;sz=125x125;nwid=18519993_newsweek;ord=545299263383843130 to location C:\Documents and Settings\username\Local Settings\Temp\5\Temporary Internet Files\Content.IE5\0ZDCVVFL\dickey;dir=newsweek[1].intl;dir=dickey;dcopt=ist;heavy=n;poe=yes;kw=dickey;pos=ad8;ad=125x125;fromrss=n;rss=n;sz=125x125;nwid=18519993_newsweek;ord=545299263383843130. Possible causes of this error include network problems or insufficient security rights. If this problem persists, contact your network administrator.

DETAIL - The filename or extension is too long.

For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

For those of you unfamiliar with roaming profiles, this is a REALLY weird error. By default, the %USERPROFILE%\Temp folder doesn’t roam, and there is no way to make it roam. In fact, the file didn’t actually exist in the roaming profile store for this user. So why did we even get this error? It appears that the process that scans the TARGET folder structure for changes examines the entire structure, regardless of the exclusions. The exclusion is applied when actually copying the profile data. I may be off here, but that’s what it looks like.

I checked this user’s local Temp folder, and the number of files in the Temporary Internet Files was also quite large (I can’t recall the number). Based on that and the size of the Cookies folder, I think that the issue wasn’t one of copying, but rather was one of comparing. The sheer number of files to be checked was large enough that it was taking a REALLY long time to scan them all.

I had the user clean up his cookies and the Temporary Internet Files, and then asked him to try logging on again. He told me that after this change, his login took less than a minute to complete. What a change!

People often talk about slow profile loads being caused by large files in the roaming profile, and that’s a valid concern. However, the cause may well be the number of files in the profile, not the amount of data.
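
If you suspect this is happening to one of your users, a quick PowerShell check of the file counts will tell you before you sit through a 30-minute logon. A rough sketch, assuming the classic pre-Vista profile layout and that you're running it in the affected user's session:

PS C:\> (Get-ChildItem "$env:USERPROFILE\Cookies" -Force | Measure-Object).Count
PS C:\> (Get-ChildItem "$env:USERPROFILE\Local Settings\Temporary Internet Files" -Recurse -Force | Measure-Object).Count

The -Force switch matters here, since much of the Temporary Internet Files structure is hidden. Counts in the tens of thousands are a red flag for exactly the kind of scan time described above.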

PowerShell Get-ChildItem Traversal Order

I was playing with Get-ChildItem in PowerShell the other day, and I came across an interesting behavior with the -Recurse switch. As a review, the -Recurse switch tells Get-ChildItem to get all of the items at the specified path and in all child items within that path. So, for example,

PS> Get-ChildItem C:\Windows -Recurse

would return all of the items in C:\Windows, and all of the items in all folders under C:\Windows recursively. This is similar to the Command Shell DIR /S command.

What’s interesting is that there is a difference in the recursive traversal order depending on whether the path is wildcarded or not. That is, the paths C:\Foo and C:\Foo\* return slightly different results when passed to Get-ChildItem -Recurse. This is a change from DIR /S, where either path returns the same results.

So how do they differ? Let’s take a look at the command line results and see what we can see. First, a folder path without wildcards:


PS> dir foo -recurse


    Directory: Microsoft.PowerShell.Core\FileSystem::C:\temp\foo


Mode                LastWriteTime     Length Name
----                -------------     ------ ----
d----         5/23/2007   1:52 PM      <DIR> Bar
d----         5/23/2007   1:55 PM      <DIR> Baz
-a---         5/23/2007   1:52 PM         26 readme.txt
-a---         5/23/2007   1:51 PM          9 testfile.txt


    Directory: Microsoft.PowerShell.Core\FileSystem::C:\temp\foo\Bar


Mode                LastWriteTime     Length Name
----                -------------     ------ ----
d----         5/23/2007   1:53 PM      <DIR> Beer
d----         5/23/2007   1:54 PM      <DIR> Wine
-a---         5/23/2007   1:52 PM         31 Menu.txt


    Directory: Microsoft.PowerShell.Core\FileSystem::C:\temp\foo\Bar\Beer


Mode                LastWriteTime     Length Name
----                -------------     ------ ----
-a---         5/23/2007   1:52 PM         15 Allagash.txt
-a---         5/23/2007   1:52 PM         15 Belgian.txt
-a---         5/23/2007   1:52 PM         15 DogFish Head.txt
-a---         5/23/2007   1:53 PM         22 Swill.txt


    Directory: Microsoft.PowerShell.Core\FileSystem::C:\temp\foo\Bar\Wine


Mode                LastWriteTime     Length Name
----                -------------     ------ ----
-a---         5/23/2007   1:54 PM         29 Australia.txt
-a---         5/23/2007   1:54 PM         29 California.txt
-a---         5/23/2007   1:54 PM         29 France.txt
-a---         5/23/2007   1:54 PM         29 Hungary.txt
-a---         5/23/2007   1:54 PM         29 Iceland.txt


    Directory: Microsoft.PowerShell.Core\FileSystem::C:\temp\foo\Baz


Mode                LastWriteTime     Length Name
----                -------------     ------ ----
-a---         5/23/2007   1:52 PM         31 Menu.txt

Notice that the first items returned are the items in the folder foo, followed by a recursive traversal of the sub-folders in foo.
Now let’s try that again, wildcarding the path:

PS> dir foo\* -recurse


    Directory: Microsoft.PowerShell.Core\FileSystem::C:\temp\foo\Bar


Mode                LastWriteTime     Length Name
----                -------------     ------ ----
d----         5/23/2007   1:53 PM      <DIR> Beer
d----         5/23/2007   1:54 PM      <DIR> Wine
-a---         5/23/2007   1:52 PM         31 Menu.txt


    Directory: Microsoft.PowerShell.Core\FileSystem::C:\temp\foo\Bar\Beer


Mode                LastWriteTime     Length Name
----                -------------     ------ ----
-a---         5/23/2007   1:52 PM         15 Allagash.txt
-a---         5/23/2007   1:52 PM         15 Belgian.txt
-a---         5/23/2007   1:52 PM         15 DogFish Head.txt
-a---         5/23/2007   1:53 PM         22 Swill.txt


    Directory: Microsoft.PowerShell.Core\FileSystem::C:\temp\foo\Bar\Wine


Mode                LastWriteTime     Length Name
----                -------------     ------ ----
-a---         5/23/2007   1:54 PM         29 Australia.txt
-a---         5/23/2007   1:54 PM         29 California.txt
-a---         5/23/2007   1:54 PM         29 France.txt
-a---         5/23/2007   1:54 PM         29 Hungary.txt
-a---         5/23/2007   1:54 PM         29 Iceland.txt


    Directory: Microsoft.PowerShell.Core\FileSystem::C:\temp\foo\Baz


Mode                LastWriteTime     Length Name
----                -------------     ------ ----
-a---         5/23/2007   1:52 PM         31 Menu.txt


    Directory: Microsoft.PowerShell.Core\FileSystem::C:\temp\foo


Mode                LastWriteTime     Length Name
----                -------------     ------ ----
-a---         5/23/2007   1:52 PM         26 readme.txt
-a---         5/23/2007   1:51 PM          9 testfile.txt

This time, notice that the first thing returned is a traversal of the first folder in foo, not the items in foo. In fact, it’s not until you get to the end of the output that you get any results from foo at all, and then only the items that are not themselves traversable.
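
If you want to see exactly which items the two forms return differently (as opposed to just the ordering), Compare-Object makes a quick harness for it. A minimal sketch against the same foo tree:

PS> $plain = dir foo -recurse | foreach { $_.FullName }
PS> $wild = dir foo\* -recurse | foreach { $_.FullName }
PS> Compare-Object $plain $wild

Against my foo tree, this should flag the Bar and Baz folders themselves, which the wildcarded form traverses but never returns as items. Adding -SyncWindow 0 will surface the ordering differences as well.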

I’m not 100% sure how to apply this information yet, but I suspect it will be useful to know it someday.

Sunday, May 06, 2007

Growing up too fast...

My oldest daughter is six and has a 'boyfriend'. The idea seemed pretty cute at first. I mean, really, that just means she has a friend that's a boy, right?

  • Not when said boy (7) and my daughter are holding hands every time they're together
  • Not when said boy gets in a fight defending his "girlfriend's" honor
  • Not when said boy thinks your 6-year-old daughter is "hot"
  • Not when said boy has a 16-year-old brother with a (real) girlfriend

Ah, yes, that's the meat of the problem - 7 going on 16. I thought I had a little time before I became a jealous and watchful father; no such luck.

Thursday, May 03, 2007

Just the data, ma’am

A customer at work requested something that at first seemed like a no-brainer, even for someone whose SQL mojo is as weak as mine. He has two SQL 2000 databases, one for production use and one for preview (integration) use, and he wanted a scheduled job to periodically copy the data from production to preview. The key was that he wanted only the data to be copied; all other objects should be left alone. My first thought was, “Surely there’s a built-in capability to do just this.” Well, not exactly.

First Try
First I tried the Copy Database Wizard, but it requires the source and target databases to be on separate SQL Server instances. No joy there.

Second Try
Data Transformation Services (DTS) has a Transform Data Task which will do this at the table level, but there are 50 or 60 tables in the database, and I didn’t want to create all those tasks. That wasn’t going to work.

Third Time’s the Charm (not!)
Further investigation into DTS revealed the Copy SQL Server Objects Task, which allows copying of sets or subsets of objects between databases. That’s what I was looking for. I set up the task to copy the data from all tables in the source database to the destination database. In the Copy tab, I configured the task to copy only the data in all of the tables, to not create the tables, and to replace the existing data.

Perfect…until I actually tried executing the task and got the following error:

Cannot truncate table ‘<tablename>’ because it is being referenced by a FOREIGN KEY CONSTRAINT

After whacking the keyboard and muttering expletives, I did a little digging and came up with what was billed as a solution for this problem: disable constraint checking before truncating the tables, do the copy, then re-enable constraint checking. The easiest way is to loop through all the tables and execute the following command for each one:

ALTER TABLE <table_name> NOCHECK CONSTRAINT ALL

Turning constraints back on was a matter of using the same loop, executing the following command for each one (the doubled CHECK is not a typo: WITH CHECK tells SQL Server to validate the existing rows, while CHECK CONSTRAINT ALL re-enables the constraints):

ALTER TABLE <table_name> WITH CHECK CHECK CONSTRAINT ALL

It did what it was billed to do, but it didn’t solve the problem. I still got the error when executing the copy objects task. As it turns out, TRUNCATE TABLE fails on any table referenced by a FOREIGN KEY constraint, whether or not that constraint is currently enabled.

Ahh, Finally!
So I did some more digging, and finally came up with another way to do this: don’t truncate the tables, just delete the data in them using the following SQL command:


DELETE <table_name>

So we still need to disable and re-enable the table constraints (otherwise the deletes would have to run in strict dependency order to avoid foreign key violations), but the process works.


So, Putting It All Together…
From the above information, I created the following DTS workflow:

1. DB Connection: Create an OLE DB Connection to the target database

2. Disable Table Constraints: Create an Execute SQL Task associated with the above connection and with the following SQL code:

set nocount on
declare @table sysname,
        @cmd nvarchar(1000)

-- enumerate all user tables in the database
declare Curs_Tables cursor static for
    select name
    from sysobjects
    where xtype = 'U'

open Curs_Tables
fetch next from Curs_Tables into @table
while (@@fetch_status = 0)
begin
    -- disable all constraints on this table
    select @cmd = 'ALTER TABLE [' + @table + '] NOCHECK CONSTRAINT ALL'
    exec (@cmd)
    fetch next from Curs_Tables into @table
end

close Curs_Tables
deallocate Curs_Tables

GO

Workflow: On success, go to step 3.

3. Delete Table Data: Create an Execute SQL Task associated with the above connection and with the following SQL code:


set nocount on
declare @table sysname,
        @cmd nvarchar(1000)

declare Curs_Tables cursor static for
    select name
    from sysobjects
    where xtype = 'U'

open Curs_Tables
fetch next from Curs_Tables into @table
while (@@fetch_status = 0)
begin
    select @cmd = 'DELETE [' + @table + ']'
    exec (@cmd)
    fetch next from Curs_Tables into @table
end

close Curs_Tables
deallocate Curs_Tables

GO


Workflow: On success, go to step 4.

4. Copy Table Data: Create a Copy SQL Server Objects Task, setting the source and destination databases. In the Copy tab, configure the settings as follows:

a. Uncheck the Create destination objects checkbox
b. Check the Copy data checkbox, and select Append data
c. Check the Use collation checkbox (depends on your needs)
d. Uncheck the Copy all objects checkbox. Click on the Select Objects button and select all tables.
e. Check the Use default options checkbox (should work for most needs)

Workflow: On completion, go to step 5.

5. Re-enable Table Constraints: Create an Execute SQL Task associated with the above connection and with the following SQL code:

set nocount on
declare @table sysname,
        @cmd nvarchar(1000)

declare Curs_Tables cursor static for
    select name
    from sysobjects
    where xtype = 'U'

open Curs_Tables
fetch next from Curs_Tables into @table
while (@@fetch_status = 0)
begin
    select @cmd = 'ALTER TABLE [' + @table + '] WITH CHECK CHECK CONSTRAINT ALL'
    exec (@cmd)
    fetch next from Curs_Tables into @table
end

close Curs_Tables
deallocate Curs_Tables

GO


That should do the trick. Please note that there’s no error handling here; if a task fails, nothing cleans up after it. This is really just a framework.


I am using this today for a customer database, and it works as a scheduled job. I believe this makes a reasonable framework for this task if you have special requirements. I’d love some feedback on this…
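
As a footnote, the constraint and delete plumbing doesn't have to live in DTS at all. Here's a rough PowerShell sketch of the same disable/delete/re-enable sequence using System.Data.SqlClient. The server and database names are placeholders, the actual data copy is elided, and this is just an illustration of the technique, not what I run in production:

# Placeholder connection string; point it at the preview (target) database
$cn = New-Object System.Data.SqlClient.SqlConnection("Server=MYSERVER;Database=PreviewDB;Integrated Security=SSPI")
$cn.Open()
$cmd = $cn.CreateCommand()

# Same user-table query the DTS tasks use
$cmd.CommandText = "select name from sysobjects where xtype='U'"
$rdr = $cmd.ExecuteReader()
$tables = @()
while ($rdr.Read()) { $tables += $rdr.GetString(0) }
$rdr.Close()

# Disable all constraints, then delete the data
foreach ($t in $tables) {
    $cmd.CommandText = "ALTER TABLE [$t] NOCHECK CONSTRAINT ALL"
    [void]$cmd.ExecuteNonQuery()
}
foreach ($t in $tables) {
    $cmd.CommandText = "DELETE [$t]"
    [void]$cmd.ExecuteNonQuery()
}

# ...the data copy from production would go here...

# Re-enable constraints, validating the (newly copied) data
foreach ($t in $tables) {
    $cmd.CommandText = "ALTER TABLE [$t] WITH CHECK CHECK CONSTRAINT ALL"
    [void]$cmd.ExecuteNonQuery()
}
$cn.Close()

The three loops map one-for-one onto the Execute SQL Tasks above; I just find a console sketch easier to experiment with.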

Monday, January 08, 2007

I got new nibs

So I finally broke down and picked up a couple of Richard Binder’s specialty nibs for the Pelikan M200. One of my M200s had a bad nib, partly through faulty manufacture (the slit isn’t centered, so the tines are uneven) and partly through carelessness (one tine got tweaked, and the iridium broke off the other). It was very scratchy and uneven, and it needed to be replaced.

I bought an ItaliFine nib and a 15-degree left-foot oblique cursive italic (.7mm) nib. Having used only Pelikan’s fine and medium nibs before, I find these a real change. The ItaliFine nib is pretty cool: it’s a .9mm cursive italic nib when held normally, and a fine nib when flipped upside down. The fine side is comparable to the Pelikan fine, although somewhat smoother in feel, and perhaps a bit wetter. It’s also rather sensitive to rotation of the pen. If you rotate too much, the edge of the nib hits the paper and all of a sudden you’ve got relatively broad strokes. The italic side writes smoothly, although I find it a bit broad for my taste. It’s also somewhat wetter than I’m used to.

The left-foot oblique cursive italic nib is nice. While the stroke is broader than I’m used to, it’s not TOO broad. It’s very smooth to write with, and the oblique angle fits perfectly with how I hold the pen. And it makes my handwriting look so much better, almost artistic.

One thing that I’m finding a bit difficult to get used to is the need to write larger. I have a relatively small hand, and the broader strokes of both these nibs make my normal writing illegible. I’m not sure this is completely a bad thing, as I’m getting older and my eyes are having to work a little harder to read. But I also use the smaller Moleskine notebooks, and larger writing means less per line and less per page. Maybe I just need to get a smaller italic nib. Hey, Richard, can you help me out…?