Monday, October 8, 2012

Changing SQL Azure database size limit / quota

The starting size quota for my SQL Azure database was 1 gigabyte (1 GB).  This particular database has a table with millions of records.  I am semi-manually importing records into that table, so I had noticed that it was creeping up on that limit with each import batch.

Finally, I ran a batch of INSERT statements and it pushed my database size over the 1 GB quota.  I fully expected that it would give me some quota message and not allow any more inserts.  What I didn't expect is that hitting the quota brought down my whole Windows Azure web site!  Even my Azure-hosted web pages that were not hitting the database started failing to load (I couldn't see the exact error message because remote error messages were turned off, and I never was able to successfully turn them on).

I logged into the billing area of Windows Azure so that I could upgrade my account and increase my database size limit.  I looked all over the billing control panel, but could find no way to upgrade my account to a larger database size.  So I had to email support.

Support was very good.  They emailed me promptly to say that they would call me the next day during business hours.  They called me the next day at 1pm my time.  The support person couldn't tell me why filling up my database caused my web site to crash, but he was able to give me a solution.  He said that I needed to increase the maximum size of my database (no surprise there).  He said that currently there was no way to do that in the Windows Azure control panel (that did surprise me), so I needed to run a script.  He sent me the script to run, here it is (it should be run against the master database):

ALTER DATABASE My_Database_Name MODIFY (EDITION='WEB', MAXSIZE=5GB)

I asked if I could set it to something between 1 GB and 5 GB so that I didn't have to pay for a lot of space that I wasn't using.  But he said that you only have to pay for as much space as you're using, so it didn't hurt to go all the way up to 5 GB.
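For what it's worth, you can check where you stand from a query window on the database itself.  This is a sketch: sys.dm_db_partition_stats was the usual way to get a SQL Azure database's size at the time, and I believe DATABASEPROPERTYEX reports the current cap, but treat the exact property name as an assumption:

```sql
-- Approximate space used by the current database, in MB
-- (run against your application database, not master)
SELECT SUM(reserved_page_count) * 8.0 / 1024 AS size_mb
FROM sys.dm_db_partition_stats;

-- Current MAXSIZE cap for this database
SELECT DATABASEPROPERTYEX(DB_NAME(), 'MaxSizeInBytes') AS max_size_bytes;
```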

As a side note, by the time I ran this script, I noticed that my database size was back below 1 GB and my site was working again!!  It appears that every day or two some kind of compression routine runs to decrease the amount of space needed by the table.  I'd seen this happen once before: after completing a batch of inserts, the database size was over 900 MB, but when I came back to add some more several days later it was closer to 800 MB.  I asked the support person about that, but he said he was more of a billing support person and I would need to ask tech support about it.  Maybe next time....

Friday, September 28, 2012

Dreamweaver cloaking file extensions

A client of mine has a very large web site built using Dreamweaver templates.  It's literally a few thousand pages of content, plus hundreds, maybe thousands, of additional document and media files.  When we want to update the template, it's a pretty big chore.

Since we have multiple authors working on the site, the first thing that we need to do before updating the template is to do a GET of the entire site to make sure all the local files are the most recent.  Doing a GET of a site this big can take an hour or two just for the text files.  We try to avoid getting any unnecessary files that will not be affected by the template by using Dreamweaver's cloaking feature.

Cloaking allows you to specify file extensions that you do not want to transfer when you do a Get or a Put of a folder, potentially with many subfolders. It seems cloaking would be easier if we could specify file extensions to include rather than exclude.  We only want the files that could be affected by the template, basically .aspx and .html, but there are dozens of file types that need to be excluded.

Recently I lost my cloaking settings in Dreamweaver and had to rebuild that list of extensions. (Here are Adobe's instructions for using cloaking).  I imagine it's a common list for anyone with a site this size with lots of documents and media files.  So I thought I'd share my list of extensions that I exclude with cloaking.  Here it is, in no particular order.  You can enter them into the cloaking settings just like this, separated by spaces, no commas:

.png .fla .flv .wmv .avi .pdf .doc .ppt .jpg .gif .zip .ppsx .wma .csv .pptx .bmp .docx .docm .log .pptm .mp3 .exe .psd .mov .xls .wmf .tif .pub .db .mp4 .xlsm .ai .arf .mpg .swf

And a basic overview of what they are:

Images: .png .jpg .gif .bmp .tif .wmf
Flash: .fla .flv .swf
Audio and Video: .wmv .avi .wma .mov .mp3 .mp4 .mpg
Documents: .pdf .doc .ppt .ppsx .pptx .docx .docm .csv .pptm .psd .xls .xlsm .ai
Other: .zip .log .exe .pub .db .arf

Wednesday, September 26, 2012

Android Launcher Icons for PhoneGap not changing

I've had the hardest time changing launcher icons for my Android application built with PhoneGap.  I would change the icons in the res/drawable-hdpi, res/drawable-ldpi, and res/drawable-mdpi folders in Eclipse, compile my app so that it launched on the phone, and the icon just would not change.  It had worked before; I had a custom icon in there, I just couldn't get it to change again.

So I would just leave the new icon there and give up on it for a while.  Then, seemingly for no reason, the icon would eventually update at some point, even though I hadn't touched it.  I couldn't figure it out for a long time.

Finally, it occurred to me that the icon must be cached somehow.  I tried clearing the application's cache on the phone, but that didn't work.  Then I tried cleaning my application in Eclipse, and that worked.  Just go to Project > Clean... and select your project (or clean all projects), then the next time you compile the app to install on the phone, the new icon should be there.

(By the way, for a while I thought maybe I got the size wrong so I was looking for a sizing table.  This is a good page with information on Android icons, but not related to PhoneGap)

Tuesday, September 25, 2012

Everest error "Unable to save given Data" code 8002A415 and "security check" error 800200A5

The "Unable to save given Data" code 8002A415 and "Unable to perform security check for the browser" code 800200A5 errors had our Everest site down for about a week.  We spent a couple days trying to figure it out ourselves and then eventually called Versata's Everest support.  Although Versata took almost two business days to get back to us, they were able to solve the problem in less than 1 business day once they started, so overall I was pretty pleased with their help.

Just a little more background.... our site that is built with the Everest API was able to perform any read-only API calls just fine -- like Session.Open, Create.List, Create.Retrieve, etc.  But if we tried a call that performs a write -- like Customer.Create -- then we would get the "Unable to save given Data" code 8002A415 error.  At first, I wasn't trapping this error, and then I would try a Session.Close, and that would give me an "Unable to perform security check for the browser" error.  Not to mention, it wouldn't close the connection, so we would quickly end up exceeding our maximum open connections.  It was a mess.

I was able to duplicate the "Unable to save given Data" error in the API's Object Browser. (though I couldn't recreate the security check error in the object browser)  I made a simple XML test case for Customer.Create and sent this screenshot to Versata.

First, I had a web conference with a Level One support person.  After trying things for about an hour, she was able to determine that the error was related to using an Everest application server that was not on the same machine as the database server.  That made sense because staff weren't having any problems using the Everest application, and it was on the same machine as the database.  Our web server and development server were on different machines.  From there, she moved it to Level Two support.

The Level Two support person suspected a COM configuration problem.  Via web conference, he looked all over the development machine at the COM settings.  We stopped and restarted the Everest COM components, restarted machine, nothing was working.  Then we started looking at the database server.  That made more sense to me since multiple other machines suddenly were having problems using the database server.

After about two hours, the Level Two support person found the COM settings that needed to be changed.  Our IT person wasn't really happy about having to make those settings; he said it left the machine somewhat vulnerable.  And we still don't know why everything worked for over a year and then suddenly we had to make those settings.  There were some network changes, and the database server IP address had changed, but we don't specify the IP address anywhere, so it didn't seem like it should matter.  Regardless, here are the steps to change those settings on your Everest database server if you need to.  This is for Windows Server 2003 Enterprise:

1. Open "Component Services".  You can either type dcomcnfg from the command prompt, or go to Control Panel > Administrative Tools > Component Services.
2. On the left side tree view, find "My Computer" under Component Services > Computers
3. Right click on "My Computer" and choose Properties
4. Go to the MSDTC tab and click the "Security Configuration" button
5. In the Security Configuration dialog, check every check box and pick "No authentication required" for the radio buttons.

That's it.  If it doesn't work at first, try a reboot of the machine.  Good luck!

Tuesday, August 28, 2012

Everest error "Unable to load requested data" 80020073

Creating XML queries for the List functions in the Everest API (Everest SDK) can be pretty painful.  They give some examples, but if you're trying to select columns or specify conditions that aren't in their examples, then you have to know the correct field name to use.  I think some are listed in an XML schema somewhere, but they're definitely hard to find.  I usually use the EOBrowser to do a Retrieve to see a "full" list of field names. Then I try to use those field names in my query XML.  That works sometimes, but the field names aren't always the same between the Retrieve columns and the XML Query columns used in List.

If you try a column name in your query XML and it isn't right, then usually you will get an error like this:

<ErrorDetails>
<ErrorMessage>
<LineNumber>19</LineNumber>
<Character>24</Character>
<Reason>Union doesn't support this value.
The element: 'Column'  has an invalid value according to its data type.
</Reason>
</ErrorMessage>
<ErrorCode>8001FA00</ErrorCode>
</ErrorDetails>

But occasionally, you get this exceptionally vague error:

<ErrorDetails>
<ErrorMessage>Unable to load requested data</ErrorMessage>
<ErrorCode>80020073</ErrorCode>
</ErrorDetails>

That error does not make it clear at all that one of your column names is wrong.  Maybe the error shows up in other situations too, but for me it was just a column name.  In my case, I was trying to make a query condition using
<Column>Location</Column>
but the correct column name for Location in the XML query conditions is
<Column>LocSubLoc</Column>

Thursday, July 5, 2012

Android SDK / Eclipse Hell

Article Summary:
1) It's not that hard to uninstall and reinstall the Android SDK and Eclipse, so if your environment gets messed up, try that sooner rather than later
2) The quality of the USB cable apparently can make the difference in the success of deploying an app to your phone.

I've been working on an Android application for several months.  When I first set up Eclipse and the Android SDK on my Windows computer, it was pretty much a breeze and I was up and running quickly.  The phone simulator worked well, and when I wanted to transfer the application to the phone I plugged the phone into computer via USB and ran the project in Eclipse and the app transferred with no problem.

Then for seemingly no reason, I started getting errors when trying to transfer the application to the phone.  I tried all the easy things that had worked for me before: unplugging phone and plugging back in; turning off phone and turning back on; rebooting computer.  Nothing was working.  I was getting a timeout error, like "java.io.IOException: Unable to upload file: null" and "Launch cancelled". Another recommendation I found was to increase the timeout interval.  No luck.

I decided to install the latest Android SDK.  That's when the real hell began.  Although I'd only been developing the app for a few months, I'd played with Eclipse and Android about a year earlier, so I had an old version of Eclipse (Galileo) and the Android SDK (v7).  As soon as I updated the Android SDK to v20, running the project started giving me errors saying that the Android SDK version required some newer version of an Eclipse file.  So then I had to update Eclipse, to Helios.  Then I got lots of new errors.  I was getting pretty stressed out and things get fuzzy at this point.  But the errors were about versions again.  I saw recommendations to run "Install New Software..." in Eclipse and run the Android AVD Manager.  The AVD Manager wouldn't even run!  I think those errors said something about missing files in the "tools" or "platform-tools" folders of the SDK, and sure enough, the new version of the SDK had some different folders and files, so I copied some things from the old SDK folders to the new ones and after much pain, finally got the AVD Manager running again and ran updates of Android and Eclipse.

*THEN* I went into the hell of this error: "This Android SDK requires Android Developer Toolkit version 20.0.0 or above.  Current version is 18.0.0.  Please update ADT to the latest version."
I ran the AVD Manager and Eclipse's "Install New Software..." for https://dl-ssl.google.com/android/eclipse over and over, but couldn't get it to upgrade the ADT to v20 because it thought that I already had the latest version.  (This could have had something to do with the sketchy file and folder moving I did, mentioned above.)  Eventually I tried manually moving files out of Eclipse's plugins and features folders to try to get Eclipse to believe I needed an update.  It seemed to work at first, but never really quite worked.

Eventually I resigned myself to reinstalling both Eclipse and the Android SDK.  I wish I would've done that many hours earlier.  I'm always intimidated by reinstalling on Windows because it can take a while, messes with the registry, and just seems like it can leave your system messy and unstable.  But what I didn't realize is that Eclipse and the Android SDK aren't normal Windows programs.  They don't alter the registry or copy DLLs into weird places.  Uninstalling them is as simple as renaming or deleting the existing folders.  And installing is pretty much just unzipping a downloaded folder to a location that you choose.  So once I finally went that route, then ran the Android AVD Manager and Eclipse's "Install New Software...", everything reinstalled nicely and I had the latest versions of Android and Eclipse.  I could start developing again!

I made some changes to the app, plugged in the phone to deploy to it, AND I GOT THE SAME TIMEOUT ERROR AGAIN!!!!  Hours of work struggling with the environment and it made no difference.  So now I thought it had to be my phone, so I tried my wife's phone: same thing!  I was so frustrated.  I started searching the web again and found a post on StackOverflow where someone mentioned that switching USB cables fixed the problem for them.  I tried my wife's USB cable and IT WORKED!!!  The application transferred on the first try with the new cable.  It seems my cable had gone bad at some point and all that hell was for nothing.

After all that ranting, the main points of this post are:
1) It's not that hard to uninstall and reinstall the Android SDK and Eclipse, so if your environment gets messed up, try that sooner rather than later
2)  The quality of the USB cable apparently can make the difference in the success of deploying an app to your phone.  Aaaargh.

Tuesday, July 3, 2012

Google Analytics: No historical data, only real-time

This one just about drove me crazy.  I had set up Google Analytics on a few other sites with no problem.  I recently set it up on a new site and got it working fine too.  We don't check the stats too often so it was a few months later that we noticed that all the historical data was gone.  The status still looked good and I compared the Analytics code in our pages with what the Analytics site said that we should be using, and they were the same.

Somewhere in those few months, Google had launched the beta of their real-time stats.  So you can look at the real-time chart updating every second, go view the web page you're tracking, and instantly see that view show up on the real-time chart.  Very cool.  What was REALLY weird is that if I did that, I could see my real-time traffic but then the next day there was still nothing in the historical stats.  At first I thought the real-time launch was affecting our historical data and maybe it just needed a few days to get the bugs ironed out.


After doing some Google searching, I saw a few people suggest that there could be a filter applied to our data that was filtering everything out.  I didn't think that sounded right because a) I was still seeing real-time data, and b) I was pretty sure there was no filter applied in our account.  Well... it turns out it was a filter, and a) filters don't affect what you see in the real-time graph, and b) the only reasons I thought we didn't have a filter were that I hadn't set one up and I didn't see it in the Analytics dashboard.  Turns out a co-worker had set up the filter and I just hadn't looked hard enough to find the filters section in Analytics.  So here is how to find the Filters section.


1) Log in to Analytics.  When I log in, I get a list of the sites I'm tracking.

2) Click on the site you want to view/modify. (That may just expand a folder; then you click on the site name again under the folder.)  You should then land on the "Standard Reporting" tab.

3) On the far right of the horizontal tab navigation, click "Admin".

4) This is the part that confused me.  On the Admin page there is a row of tabs: Profiles, Tracking Code, Property Settings, Social Settings.  Be sure that Profiles is highlighted; below it you will see a second row of tabs that includes "Filters".

So if, like me, you are seeing no historical data, this is where you can remove or fix your filter(s).  In our case it was an "Include" filter set up for a subfolder, so only traffic from that subfolder was included in the historical results, and it had no traffic.  If you do want to view different data sets using different filters, then you need to make a new Profile for each filtered set that you want.  I think it's best to have no filters in your default Profile, then create a new Profile any time you want to add one or more filters.

Monday, June 4, 2012

SQL Azure speed test - multiple inserts

I ran an interesting test on a SQL Azure database tonight.  I'm trying to insert about 3.5 million random test records into a table with a script.  The script was initially calling each INSERT statement individually.  When I tested locally, it went quite fast, about 1000 inserts per second.  I'd seen some posts where people had estimated that speed for SQL Server (not Azure), so I thought I'd achieve the same on Azure.

What I found was that I only achieved about 122 individual insert statements per second on SQL Azure.  And adding to that frustration, the script that generated the INSERTs kept timing out after 2 minutes, so I had to sit and keep refreshing it.  I ran it about 8 times before I realized that it would probably be much faster to put a long list of values into one INSERT statement rather than inserting one row per statement.  That was confirmed by this post on StackOverflow.  But I had already started a table to measure results, so I ran it a couple more times for nine tries total.  The results are below.

When I changed my script to insert about 300 rows per INSERT, that put me over 200 rows per second, but still not as fast as I was hoping.  At least I changed the script timeout as I learned from this post so that I could refresh the page a little less than every 2 minutes.
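The change itself was just switching to the multi-row VALUES syntax that SQL Server 2008 and later (including SQL Azure) support; note that a single table value constructor is limited to 1000 rows.  Table and column names here are made up for the example:

```sql
-- Before: one network round trip per row (slow against SQL Azure)
INSERT INTO TestRecords (Val1, Val2) VALUES (1, 0.5);
INSERT INTO TestRecords (Val1, Val2) VALUES (2, 0.7);

-- After: many rows per statement (up to 1000 per VALUES list)
INSERT INTO TestRecords (Val1, Val2)
VALUES (1, 0.5),
       (2, 0.7),
       (3, 0.1);
```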

(Update: After I later switched to 1000 rows per INSERT, I randomly got 3 trials of really good results, almost 1000 rows inserted per second! Then it was back to below 200 again the next time. Maybe it has to do with server load and I got lucky to finally hit a dead patch for half an hour.)

Note that SQL Azure performance does not depend on the size or number of instances that you choose for your compute services.  I don't believe there is any way to customize the hardware specs for SQL Azure.  But of course you can tune your own database with indexes, etc.

I was also interested in the storage size of my table since SQL Azure charges by storage space. My table is pretty small with the following field types (sizes acquired from this helpful post):
int (PK) (4 bytes)
smallint (2 bytes)
float (4 bytes)
float (4 bytes)
smalldatetime (4 bytes)
int (4 bytes)
int (4 bytes)

Seems like it should only be 26 bytes per row.  Instead, I was getting about 68 bytes per row, over double what I expected!  And that was before I added another index.  Adding one index with two fields doubled it again to about 136 bytes per row.  I expected there might be some overhead, but nowhere near that much.
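If you want to see where that extra space is going, a query along these lines breaks reserved space down by index (the table name is made up, and 1 page = 8 KB).  I didn't run exactly this at the time, so treat it as a sketch:

```sql
-- Space reserved and used per index for one table
SELECT i.name AS index_name,
       s.row_count,
       s.reserved_page_count * 8 AS reserved_kb,
       s.used_page_count * 8 AS used_kb
FROM sys.dm_db_partition_stats s
JOIN sys.indexes i
  ON i.object_id = s.object_id
 AND i.index_id = s.index_id
WHERE s.object_id = OBJECT_ID('TestRecords');
```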

Here are those initial insert test results from the INSERTs:

Total Storage Size (MB)   Total Rows   Rows inserted in 2 min.   Inserts / sec   Bytes / row
0                         0            -                         -               -
1.19                      17227        17227                     143.56          69.08
2.55                      37233        20006                     166.72          68.49
3.43                      49978        12745                     106.21          68.63
4.36                      63560        13582                     113.18          68.60
5.02                      73273        9713                      80.94           68.51
5.64                      82204        8931                      74.43           68.61
6.85                      99972        17768                     148.07          68.52
7.59                      110642       10670                     88.92           68.60
9.05                      131941       21299                     177.49          68.59

Average Inserts / sec: 122.17

Friday, May 4, 2012

Google Apps email : new account for alias domain

Well, here's the very short version of what I learned the other night: if you create an email / GMail account for your primary domain in Google Apps, it takes an hour or two for the same account to automatically work for your alias domain.

Now, the long version.... I run all of the email for my kcwebprogrammers.com domain through Google Apps mail servers.  This is done by setting up a Google Apps account and then pointing the domain's mail (MX) DNS records at the Google servers specified in your Apps account.  I also have what Google Apps calls a domain alias.  This is another domain that is assigned to the same Google Apps account, but it shares mailboxes with the primary domain.  In other words, if you create a new account / mailbox for the primary domain, then that same account will work on the alias domain too.

So I created a new email / Gmail account for kcwebprogrammers.com, sent it a test email, and it worked.  Then I sent another test email to the same account at the alias domain name, and it failed.  I couldn't figure it out since the Apps help on alias domains clearly says
"By adding a domain alias, you give every user in your domain a second email address (with the alias after the @). For example, if your domain is solarmora.com and you add solarmora.net as a domain alias, every user@solarmora.com will also receive mail addressed to user@solarmora.net."


After looking through all my Google Apps settings, looking through settings on my domain registrar, and even my web host, I did find something else in Google's help files saying that a different kind of email alias could take up to an hour to activate.  So I waited and tried it again the next day and it worked!  I wasted an hour or so looking at settings and help files, but as my wife reminds me, at least I learned a few things.

asp.net SelectParameters: use variable from code

Today I was building a GridView and wanted to set the SelectParameters of the SqlDataSource to use a value that was being sent in the query string.  But then I realized that the value I needed was not being sent in the query string and I didn't want to change the calling page to send the parameter that I needed.  What I wanted to do was to run a query in Page_Load to get the value I needed, store it in a variable, and then assign that variable to some SelectParameter.  But I'd never set the value of a SelectParameter to a value stored in a variable.

When I did a search, I found this solution that uses a generic Parameter with its value set from code.  That looked promising, but I couldn't get it to work.  I figured I could copy my variable into a Session variable and then use a SessionParameter, but that seemed like overkill to create a Session variable just for that page.

The solution ended up being pretty simple.  What I did was:

  • create a TextBox control on the page
  • set the TextBox's Text property to the value of my variable obtained from the query in Page_Load
  • use a ControlParameter for the SelectParameter of the data source and specify that TextBox as the control from which to get the value

In my case, it turned out that I could keep the control visible on the page because it was useful information, but if that isn't an option for you then you can just set the control's Visible property to false.
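To make that concrete, here's a rough sketch of the markup.  The control names, query, and connection string name are invented for the example; the Text property gets set in Page_Load, and the ControlParameter picks it up from there:

```aspx
<%-- TextBox whose Text is set in Page_Load from a query --%>
<asp:TextBox ID="RegionBox" runat="server" />

<asp:SqlDataSource ID="GridSource" runat="server"
    ConnectionString="<%$ ConnectionStrings:myConnectionString %>"
    SelectCommand="SELECT OrderID, Total FROM Orders WHERE Region = @Region">
    <SelectParameters>
        <%-- pull the parameter value from the TextBox control --%>
        <asp:ControlParameter Name="Region" ControlID="RegionBox"
            PropertyName="Text" />
    </SelectParameters>
</asp:SqlDataSource>

<asp:GridView ID="OrdersGrid" runat="server" DataSourceID="GridSource" />
```

And in Page_Load, something like RegionBox.Text = myLookedUpValue; before the data source is evaluated.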

Tuesday, April 17, 2012

ASP.Net: Dependent DropDownList filters


Twice now I've had to build forms with multiple cascading DropDownLists, each one filtering the next. So the list of values in the second DropDownList depend on the selection in the first, the list of values in the third depend on the selection in the second, etc.
First I tried using just code in the .aspx file, with the SelectCommand of the SqlDataSource for the second DropDownList using a ControlParameter based on the first DropDownList.  If I set the AutoPostBack of the first list to "true", then when I updated the first, the second would filter correctly.  That was good.
But I found it would only work the first time a value was selected in a DropDownList.  After that I couldn't get a list's values to change again. In other words, when I first loaded the form and chose a value for first DropDownList, the second would populate...when I chose a value for second, the third would populate...but if I then changed the value of the second, I couldn't get the third to re-populate again. It always got stuck with the first set of values that were DataBound to it.
So I built a small function called RebuildDropDown that I call from the OnSelectedIndexChanged handlers in the .aspx.cs file. In the handler for a given DropDownList, I call RebuildDropDown for each dependent list.  Here is the code example for a School/Class/Student set of filters.  Each DropDownList should have its SelectCommand and DataSource set elsewhere, should have AutoPostBack set to "true", and should use handlers similar to these for OnSelectedIndexChanged.

.aspx.cs:
protected void SchoolBox_SelectedIndexChanged(object sender, EventArgs e)
{
    RebuildDropDown(ClassBox);
    RebuildDropDown(StudentBox);
}

protected void ClassBox_SelectedIndexChanged(object sender, EventArgs e)
{
    RebuildDropDown(StudentBox);
}

protected void RebuildDropDown(DropDownList ddl)
{
    ddl.Items.Clear();

    // OPTIONAL: add "ALL" option as first option.
    // NOTE: for this item to survive the DataBind() below, the
    // DropDownList must have AppendDataBoundItems="true" (which is
    // also why we Clear() first, so items don't pile up).
    ListItem li = new ListItem("All", "ALL");
    ddl.Items.Add(li);

    // now add the databound items
    ddl.DataBind();
}
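For completeness, here's roughly what the markup side might look like for the School and Class lists.  The data source names, fields, and connection string name are invented; the key pieces are AutoPostBack="true", the OnSelectedIndexChanged wiring, AppendDataBoundItems="true" so the "All" item added in RebuildDropDown survives the DataBind, and a ControlParameter pointing at the previous list:

```aspx
<asp:DropDownList ID="SchoolBox" runat="server" AutoPostBack="true"
    AppendDataBoundItems="true"
    OnSelectedIndexChanged="SchoolBox_SelectedIndexChanged"
    DataSourceID="SchoolSource"
    DataTextField="SchoolName" DataValueField="SchoolID" />

<asp:DropDownList ID="ClassBox" runat="server" AutoPostBack="true"
    AppendDataBoundItems="true"
    OnSelectedIndexChanged="ClassBox_SelectedIndexChanged"
    DataSourceID="ClassSource"
    DataTextField="ClassName" DataValueField="ClassID" />

<asp:SqlDataSource ID="SchoolSource" runat="server"
    ConnectionString="<%$ ConnectionStrings:myConnectionString %>"
    SelectCommand="SELECT SchoolID, SchoolName FROM Schools" />

<%-- the Class query is filtered by the currently selected School --%>
<asp:SqlDataSource ID="ClassSource" runat="server"
    ConnectionString="<%$ ConnectionStrings:myConnectionString %>"
    SelectCommand="SELECT ClassID, ClassName FROM Classes WHERE SchoolID = @SchoolID">
    <SelectParameters>
        <asp:ControlParameter Name="SchoolID" ControlID="SchoolBox"
            PropertyName="SelectedValue" />
    </SelectParameters>
</asp:SqlDataSource>
```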

Wednesday, April 4, 2012

SSL: Resources loading http vs. https

I went to view the SSL certificate on one of my sites today and ended up learning a few things about SSL / https and Google's Chrome browser. I knew that if I clicked the lock next to https in the address bar that it would display information about the certificate, like the issuer name and expiration date.  In this case it also had a warning for me that said "However, this page includes other resources which are not secure." Basically this means that although your main page is loading securely encrypted with https, some resources, like images or scripts, are loading via http so they are not secure and are loading unencrypted.

So the first thing I learned is how to tell which resources are loading without https.  There are actually a couple ways to do this in Chrome, but here is the one I used:


1) In Wrench menu, choose Tools > Developer tools

2) Click on "Resources" toolbar icon

3) Expand the Frames folder to see the different pages. Expand the page whose resources you want to see. The individual Resources for the page are then listed, broken down by Images, Scripts, and Stylesheets

4) To see the URL that was used to load that resource, just hover the mouse over the resource name and the URL will appear, either with http or https. You can also click on an image name to see the image on the right side, along with its URL

For me, some of the culprits were background images with absolute URLs in the CSS page.

So once you change all the resources loading with http to https, then you would think that message should go away when you refresh the page, right?  Actually, you have to fully close the browser and then open it back up and go to the page again for the warning to clear.  That was the second thing I learned.  I learned that on StackOverflow.

And lastly, there is also a visual cue in the address bar so that you can tell whether everything is loading securely without having to click the lock and look for the message.  When all resources are loading securely with https (and you've closed and reopened the browser and reloaded the page), the lock and the "https" text in the address bar are green.  If you have images that are not loading securely, the lock is white with a little yellow warning icon on it.  And when you have a script that is not loading securely, you get a white lock with a red X and red https text.


Tuesday, March 27, 2012

GridView: sort with OnSorting

A lot of the posts that I've made are about ASP.Net's GridView.  I use it all the time; it's the backbone of reporting in ASP.Net.  It does a lot of things very well and easily.  One of the things it completely handles is sorting the rows of the grid by a column just by clicking on that column's header.  All you have to do is set AllowSorting=true, set the SortExpression of each column to the name of the field for that column, and you get sorting....but there's one catch.  You have to be using a data source that supports automatic sorting, like SqlDataSource; otherwise you'll have to handle sorting on your own.  I have a GridView that uses a DataSet as the data source, so I had to write something myself.

It *still* isn't that hard if you just want to sort Ascending any time someone clicks the header.  But the sort functionality that GridView provides with SqlDataSource is that the first click on a header sorts Ascending, and another click on the same header sorts Descending.  To duplicate that takes a little more work, using a ViewState or Session variable.

Here's a solution that I came up with that I thought was pretty clean.  The basic ideas are:

  • Create your unsorted SQL statement in your Page_Load and assign it to a member variable of the class.  That way you can use it both in the initial population of the grid in Page_Load and in your OnSorting handler that is called during the PostBack.
  • Create your ORDER BY clause based on the previous ORDER BY clause, which you save in the ViewState of the page.  You could save it in a Session variable, but ViewState is better since you only need it on this one page.

Here's the code for the function to handle your grid's OnSorting event.  Just set OnSorting="reportGrid_Sorting" in your grid:

protected void reportGrid_Sorting(object sender, GridViewSortEventArgs e)
{
    // m_unsortedSql has already been created in Page_Load
    string orderByClause = e.SortExpression;
    string previousOrderByClause = "";
    if (this.ViewState["orderClause"] != null)
    {
        previousOrderByClause = this.ViewState["orderClause"].ToString();
    }
    if (previousOrderByClause == e.SortExpression + " ASC")
    {
        orderByClause += " DESC";
    }
    else
    {
        orderByClause += " ASC";
    }
    // now we have the ORDER BY clause that we want.
    // save it to the ViewState and add it to our unsorted SQL
    this.ViewState["orderClause"] = orderByClause;
    string sortedSql = m_unsortedSql + " ORDER BY " + orderByClause;

    SqlConnection myConnection = new SqlConnection(ConfigurationManager.ConnectionStrings["myConnectionString"].ConnectionString);
    SqlDataAdapter myCommand = new SqlDataAdapter(sortedSql, myConnection);
    DataSet ds = new DataSet();
    myCommand.Fill(ds);

    reportGrid.DataSource = ds;
    reportGrid.DataBind();
}
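
For reference, the grid declaration on the .aspx side might look something like this (the column fields shown are just examples; the ID and OnSorting values match the code-behind above):

```
<asp:GridView ID="reportGrid" runat="server"
    AllowSorting="true" AutoGenerateColumns="false"
    OnSorting="reportGrid_Sorting">
    <Columns>
        <asp:BoundField DataField="Title" HeaderText="Title" SortExpression="Title" />
        <asp:BoundField DataField="Publisher" HeaderText="Publisher" SortExpression="Publisher" />
    </Columns>
</asp:GridView>
```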


Monday, March 26, 2012

Google Docs: Hiding toolbar for viewers

One of my clients had a spreadsheet that they wanted to let the public view, but not edit.  I've done a lot with sharing Google Docs, so I knew that all you had to do was go to Share and specify "Anyone who has the link can view".  The problem with that is that it still shows the menu bar and the toolbar to the viewer.  They are disabled / grayed out, but they are still there taking up space, and might be confusing to some people.  My client asked if we could hide the toolbar from those who only had permission to view.  It really seemed like Google Docs should be able to hide them for all viewers, but it took me a while to find the solution.

The solution was in "Publishing" the Google Doc.  What publishing does is create a static online version of the document, outside of Google Docs.  This can be a web page, a PDF, or other types of files.  By default, every time you change the Google Doc, the document will be re-published so that your published version always has the latest edits.  There are lots of options for what kind of file to publish to, but we wanted a web page, so that's what we chose.  And just as we wanted, the web page has no menu or toolbar controls on it.

To publish a Google Doc, just go to the File menu and choose "Publish to the Web".  Then you can select what type of file you want to publish to and get the link to provide to your viewers.

Wednesday, February 8, 2012

Everest: The number of users that simultaneously access SDK cannot exceed the 25 user(s) for this license

I have a client for which I use the Everest SDK to connect to the Everest database from a web application.  Last week, we started getting this error on the site:

"The number of users that simultaneously access SDK cannot exceed the 25 user(s) for this license"

Essentially, all 25 allowable connections were used up, so my web pages couldn't connect to do anything; none of the pages on the site that connected to Everest would work.

I thought that maybe I had a place in my code where I wasn't closing a connection and the open connections had built up to 25.  The SDK has a Clear method to clear sessions, which can be accessed from the SDK object browser, so I figured I'd just go in there and clear some sessions.  The catch was that I couldn't even log into the SDK object browser because there were no connections left to log in with.  We tried rebooting the Everest server, we tried rebooting the web server, but neither one cleared the sessions.

Finally we filed a support ticket.  After a couple days, we received this reply:

"I wanted to reach out to you to give you an update on your ticket.
You wanted to remove user session of SDK.
Please ensure all users are logged out from Everest.
Please open SQL Server Management Studio and select database EVEREST_SYSTEM .
Select 'New Query' and run the command "Delete from EVEREST_SPM" .
"

That did the trick.  All connections were cleared and the web site was able to connect to Everest again via the SDK.  The support technician also said that SDK connections are not separate from normal Everest connections.  So that makes it sound like the problem could have been solved by just forcing a few Everest application users to log out.  Seems like the reboot would have done that.  But we'll look into that solution more next time it happens.

Friday, January 27, 2012

GridView: Grouping with Subtotals

The issue has come up a couple of times on a site I work on: we would like a GridView that provides subtotals based on grouping by a certain field.  So we wanted it to sort and group by one field -- Publisher, for example -- then, under the last row for each publisher, provide a subtotal row, then continue on with the next group of publishers.  Basically, this is the same type of grouping that Microsoft Access provides in its reports automatically, but it is not provided as a standard feature of GridView.

So I began looking for a solution.  I found one here that was pretty close to what I wanted.  It shows how to add a subheading after each group, but doesn't do any totalling of numeric fields:
Including Subheadings in a Datagrid

So I modified that solution to also provide subtotals on the subheader line.  I use a hash table to store all the different totals.  (I have another post on creating a footer row with totals and this solution ties in pretty tightly with that.)  And the other modification I made is to make the grouping field dynamic so that it can easily be changed by sending in different parameters in the URL.  Here it is:


        // initialize the hash table here, otherwise AddToTotal throws a NullReferenceException
        protected Hashtable total = new Hashtable();
        protected string m_groupFieldName = "";

        protected void Page_Load(object sender, EventArgs e)
        {
            // get grouping type
            string groupType = Request.QueryString["group"];
            if (groupType == "A")
            {
                m_groupFieldName = "A";
                ReportTitle.InnerHtml = "Grouped by A";
            }
            else if (groupType == "B")
            {
                m_groupFieldName = "B";
                ReportTitle.InnerHtml = "Grouped by B";
            }

            // the SQL should ORDER BY the grouping field so that rows in the same group are contiguous
            string sql = "YOUR SQL HERE";
            SqlConnection myConnection = new SqlConnection(ConfigurationManager.ConnectionStrings["EMTConnectionString"].ConnectionString);
            SqlDataAdapter myCommand = new SqlDataAdapter(sql, myConnection);
            DataSet ds = new DataSet();
            myCommand.Fill(ds);

            string curCat = "", prevCat = "";
            int i = 0;
            DataRow subheaderRow = null;

            // Loop through the rows of the data adding placeholder rows into dataset
            while (i <= ds.Tables[0].Rows.Count - 1)
            {
                curCat = ds.Tables[0].Rows[i][m_groupFieldName].ToString();
                if (i == 0)
                {
                    prevCat = curCat;
                }
                if (curCat != prevCat)
                {
                    // categories changed, so make new row
                    subheaderRow = ds.Tables[0].NewRow();

                    // for the field on which we're grouping, display the grouped value and " TOTAL"
                    subheaderRow[m_groupFieldName] = prevCat + " TOTAL";

                    // set the value of each numeric field to the running subtotal calculated below
                    // or to a calculation based on those subtotals
                    subheaderRow["NumericFieldA"] = GetTotal("NumericFieldASubTotal");
                    subheaderRow["NumericFieldB"] = GetTotal("NumericFieldBSubTotal");
                    subheaderRow["APctOfB"] = (GetTotal("NumericFieldASubTotal") / GetTotal("NumericFieldBSubTotal")).ToString();

                    ds.Tables[0].Rows.InsertAt(subheaderRow, i);

                    prevCat = curCat;
                    i += 1;

                    total.Clear();
                }

                // keep running totals for numeric columns
                AddToTotal(ds.Tables[0].Rows[i]["NumericFieldA"].ToString(), "NumericFieldASubTotal");
                AddToTotal(ds.Tables[0].Rows[i]["NumericFieldB"].ToString(), "NumericFieldBSubTotal");

                i += 1;
            }

            // make the final subtotal row
            subheaderRow = ds.Tables[0].NewRow();
            subheaderRow[m_groupFieldName] = prevCat + " TOTAL";
            subheaderRow["NumericFieldA"] = GetTotal("NumericFieldASubTotal");
            subheaderRow["NumericFieldB"] = GetTotal("NumericFieldBSubTotal");
            subheaderRow["APctOfB"] = (GetTotal("NumericFieldASubTotal") / GetTotal("NumericFieldBSubTotal")).ToString();
            ds.Tables[0].Rows.InsertAt(subheaderRow, i);

            reportGrid.DataSource = ds;
            reportGrid.DataBind();

        }

        protected void ReportGrid_RowDataBound(Object sender, System.Web.UI.WebControls.GridViewRowEventArgs e)
        {
            if (e.Row.RowType == DataControlRowType.DataRow)
            {
                string subheadFlag = DataBinder.Eval(e.Row.DataItem, m_groupFieldName).ToString();

                // See if we have a Subheader
                if (subheadFlag.Contains(" TOTAL"))
                {
                    // Format the row
                    e.Row.Font.Bold = true;
                    e.Row.BackColor = Color.LightSalmon;
                }
            }
        }
        protected double GetTotal(string type)
        {
            // return the running subtotal for this key, or 0 if none has been recorded yet
            if (total.ContainsKey(type))
            {
                return double.Parse(total[type].ToString());
            }
            return 0;
        }
        protected double AddToTotal(string itemStr, string type)
        {
            double item;

            // boolean columns count as 1 when True; anything else non-numeric counts as 0
            if (itemStr == "True")
            {
                item = 1;
            }
            else if (!double.TryParse(itemStr, out item))
            {
                item = 0;
            }

            if (total.ContainsKey(type))
            {
                total[type] = double.Parse(total[type].ToString()) + item;
            }
            else
            {
                total[type] = item;
            }

            return item;
        }
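
To wire the row handler up, the grid's markup just needs the OnRowDataBound attribute set.  A sketch of what that might look like (the ID and the column fields here are only examples matching the placeholder names in the code above):

```
<asp:GridView ID="reportGrid" runat="server"
    AutoGenerateColumns="false"
    OnRowDataBound="ReportGrid_RowDataBound">
    <Columns>
        <asp:BoundField DataField="Publisher" HeaderText="Publisher" />
        <asp:BoundField DataField="NumericFieldA" HeaderText="Field A" />
        <asp:BoundField DataField="NumericFieldB" HeaderText="Field B" />
        <asp:BoundField DataField="APctOfB" HeaderText="A % of B" />
    </Columns>
</asp:GridView>
```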