Tuesday, March 1, 2022

How to check the sharepoint 2010 workflow status


Hello,

Is there any way to know the workflow status, i.e., whether it is completed or not? I want to check this status in the "OnTaskChanged" event in Visual Studio 2010.

Thanks in advance.



  • Edited by Rauf Ab Tuesday, February 28, 2012 6:13 AM

------------------------------------
Reply:

Thanks,

I have checked this article, but it does not fulfill my need. Can you please correct the following code? It produces null in the "taskStatus" variable.

Guid statusField = workflowProperties.TaskList.Fields["Status"].Id;
String taskStatus = onApprovalTaskChanged_AfterProperties1.ExtendedProperties[statusField].ToString();

Thanks in advance.


  • Edited by Rauf Ab Tuesday, February 28, 2012 7:30 AM

------------------------------------

Accessing SCCM database with Access 2010

Hi All

I have gotten involved in an interesting discussion here and was wondering what the wider community thought, so I figured I would start a discussion around it.

We have some developers working here that want to pull data from the SCCM database using an Access 2010 database they have created for application management and also to do some other stuff. From what I understand they won't be adding data to the SCCM database or changing any of its tables, etc.

I know from past experience that when it comes to the SCCM database it's always best to leave it alone and utilise the reporting node in the SCCM console to obtain data from the database.

What are other people's thoughts?

Is it OK to access the SCCM database with an external program such as Access 2010, as long as it is in a read-only capacity? Are there any serious issues that could be encountered when doing this?


Reply:
Sure. That's all a reporting point or SRS reporting point is doing: querying the database to show information. As long as you are not mucking with the DB itself, querying it is perfectly acceptable using whatever tool you wish.

Jason | http://blog.configmgrftw.com | Twitter @JasonSandys


------------------------------------

Sync iPad 2 with Microsoft Business Contact Manager - How to?

How do I Sync iPad 2 with Microsoft Business Contact Manager? 

I can sync Outlook Contacts, but Business Contact Manager does not sync.

Our sales team needs to see BCM contacts on the road, or we have to return our iPads.

Help!

  • Changed type Max Meng Wednesday, February 29, 2012 4:11 PM 3rd-party issue

Reply:

Try the trial version to see if it will work with iPad 2:

http://www.companionlink.com/ipad/outlook/

Please Note: The third-party product discussed here is manufactured by a company that is independent of Microsoft. We make no warranty, implied or otherwise, regarding this product's performance or reliability.


Max Meng

TechNet Community Support


------------------------------------
Reply:

Max - I tried CompanionLink and it sort of works; however, it puts ALL contacts into ONE folder.

So My Business Contacts, Personal Contacts and the other 5 Contact folders all end up in the same giant file. 

Bummer!   Thanks for trying!  Gene


Gene Grodzki

  • Edited by Max Meng Tuesday, February 28, 2012 4:56 AM edit improper words

------------------------------------
Reply:

Hi Gene,

For this issue, you should either contact Technical Support for CompanionLink:

http://www.companionlink.com/support/contact.html

Or post your question in a forum that discusses iPad issues:

http://www.apple.com/support/


Max Meng

TechNet Community Support


------------------------------------

What is the Best PowerShell Script Editor?

Is there one out there?

Reply:

I use and have used Sapien PrimalScript for quite some time.  I have tried all of the other editors (I think) and always come back to PrimalScript.

Editors are both a matter of functionality and style/taste. 

I need a good debugger, plus I need an editor that will edit almost every kind of file. PrimalScript can edit dozens of file types.  The ones I use frequently are VBS, PS1, PSM1, JS, ADM, XML, TXT, HTA, HTML, Batch, ASP, CS, C, SQL, and VB.

This is only about one third of the file types available.

There are debuggers for PS1, VBS, and JS.  You can assign an external debugger to any file type that has a debugger.  Many script products come with debuggers but have weak editors.

PrimalScript has support for projects and packaging to EXE.

I like the ability to save/read files to and from an ftp site.

As a secondary editor for PowerShell both PowerShell_ISE and PowerGUI are usable but I don't get along with either of them very well.

All in all, PrimalScript has more features than any other editor available.

I used MSEdit, Pedit, Brief, and other editors like them for years.  All of them have disappeared.


jv

------------------------------------
Reply:

There are several out there, each with its own unique flavor. The majority, if not all, will give you a trial download to use and evaluate for your own needs.


------------------------------------
Reply:

Hi,

(Opinion follows, based on my knowledge of PrimalScript 2012)

PrimalScript is good as a special-purpose editor as it provides late-binding code completion and WSH script debugging. However, in my opinion, it is a weaker general-purpose text editor due to limited configurability (for example, there is no support for multi-keystroke commands such as Ctrl-K D as in WordStar/EMACS, and you cannot modify the language configurations). Its regular expression matching is also pretty weak, IIRC. In other words, in the general-purpose text editing department, PrimalScript falls short, in my opinion. If these kinds of issues aren't a problem, PrimalScript might be a good (albeit rather expensive) choice.

OTOH, if you need very powerful general-purpose text editor, I would recommend something like EmEditor or possibly UltraEdit. As ScriptingWife said, you can download evaluations of these software packages and find one you like.

Bill


  • Edited by Bill_Stewart Friday, December 13, 2013 8:05 PM Updated post; PrimalScript still has not fixed the issues I mentioned

------------------------------------
Reply:

PrimalScript 2011 has been enhanced. I believe most or all of the issues Bill mentions have been addressed.

The keystroke issue is not an issue.  All keystrokes in PrimalScript are configurable.  The base set is (pretty much) the standard Microsoft Windows key access set.

The Regex issue is gone with a new Regex provider, I am told, although Sapien hasn't posted the Regex documentation for the provider, so that is still pretty much all guesswork.

I use PrimalScript as a general-purpose text editor all of the time.  It is almost as good as Brief was.

As a PowerShell editor it is superior to almost all general-purpose editors.  The topic, after all, is, "What is the best PowerShell Script Editor?"

I do recommend downloading and trying more than one editor, and considering all issues besides just PowerShell.

If you want debugging, then the field is narrowed considerably, as most general-purpose editors do not have syntax highlighting, IntelliSense, or debugging.

Where UltraEdit is good is if you need a cross-platform editor.  As its ads declare, it is mostly a cross-platform replacement for Notepad that can do hex.  As a code editor it uses a word file to highlight and do completion.  PowerShell and other modern editors use reflection and syntax maps to do this.  They are faster and more accurate.

PowerShell completion and highlighting use reflection.  This allows us, with code files, to load libraries and have them show up highlighted and supporting auto-complete.  General-purpose editors cannot do this.

One plus of a GP editor is that it is inexpensive: as little as $6.99 per license.  If you just want great text editing with very basic code support, then these text editors are very good.

If you want inexpensive try free from Sapien. 

PrimalPad 2011 now edits and executes PowerShell scripts and is free for the downloading.  PrimalPad is a completely independent editor that requires no installation.  Just save it to a thumb drive and it is ready to go with you.  On a 4 GB drive you can carry all of your script library and the editor, ready for use on any machine with a USB port.


jv

------------------------------------
Reply:
The keystroke issue is not an issue.  All keystrokes in PrimalScript are configurable.  The base set is (pretty much) the standard Microsoft Windows key access set.

Not an issue to you, perhaps. Can PrimalScript now use multi-keystroke commands? If not, I personally can't use PrimalScript day-to-day as a general-purpose editor because it just is not configurable enough for my needs (particularly at its price point). As another example, the inability to configure the Home key to always jump to column 1 (instead of the first non-whitespace character on the line) makes PrimalScript nearly unusable to me as a general-purpose editor.  But as I said, it's just my opinion. To each his own.

Bill


------------------------------------
Reply:
The keystroke issue is not an issue.  All keystrokes in PrimalScript are configurable.  The base set is (pretty much) the standard Microsoft Windows key access set.

Not an issue to you, perhaps. Can PrimalScript now use multi-keystroke commands? If not, I personally can't use PrimalScript day-to-day as a general-purpose editor because it just is not configurable enough for my needs (particularly at its price point). As another example, the inability to configure the Home key to always jump to column 1 (instead of the first non-whitespace character on the line) makes PrimalScript nearly unusable to me as a general-purpose editor.  But as I said, it's just my opinion. To each his own.

Bill


Bill,

 

If you need WordStar-like keystrokes, then PrimalScript is not for you.  It is NOT a general-purpose text editor.

Most people do not need or use those keystrokes.  Even most experienced Windows users and techs do not use the keyboard much.  I have been lobbying them to learn a little keyboard.

Like you, I started with editors that were more configurable.  The ones that are today do not do code editing very well.  PrimalScript is a very good code editor with an excellent debugger.  I have learned to get along with its keyboard.  I do wish it would support emacs and Brief completely. Maybe next year.

I liked Brief because I could import keyboard customizations.  I could even use it as a VT100 ANSI terminal with edit capability. That was pretty neat.

 


jv

------------------------------------
Reply:

To me, code editing is a subset of general-purpose text editing. Granted, PrimalScript is much more than a text editor; it's more of a scripting IDE. That said, though, I would make the case that an IDE (especially one at PrimalScript's price point) should be more configurable for power users, not less. PrimalScript has some very nice value-added features but is expensive. It is not my preferred code editor choice due to very limited configurability and limited regular expression support. (My opinion of course)

As has already been mentioned, users can download trial versions of software and choose for themselves.

HTH,

Bill


------------------------------------
Reply:
I've tried both PrimalScript and PowerGUI; for the price, I like the PowerGUI script editor much better. I think PrimalScript is more complicated than it really needs to be.
Z-Hire -- Automate IT Account creation process
Z-Term -- Automate IT account termination process

------------------------------------
Reply:
Did you try PowerShell SE?

PowerShell SE - easy to use script editor, debugger and help viewer for Windows PowerShell (based on the PowerShell ISE Code Editor).

------------------------------------
Reply:

Take your pick!  :)

List of PowerShell Script Editors


Rich Prescott | Infrastructure Architect, Windows Engineer and PowerShell blogger | MCITP, MCTS, MCP

Engineering Efficiency
@Rich_Prescott
Windows System Administration tool
AD User Creation tool


------------------------------------

Direct Access not working on a desktop computer

I have a problem with a computer running DirectAccess. DirectAccess was running fine on this computer until six weeks ago. Since then I have been beating my head on this. The DirectAccess Connectivity Assistant is saying that it is not working. When I look at the drives in network locations, the file server drives do not show up. If I do a Remote Desktop connection to the file server from this computer, I have no problem getting in. The only error that I could find is that port 445 is not open.  I have tried another computer at this location and it gets in fine, so it has to be something with this computer.

  Thanks and have a great day

    Bob


Reply:

Hello Bob

Could you please answer the following questions so that we can proceed further on this:

1. What is the transition technology being used?

2. Is the client able to get an IPv6 address?

3. Is the IPsec tunnel established?

4. Is it happening for one client or more than one?

5. Do you see any errors in the Event Viewer Security tab?


------------------------------------

Location of Visual Studio 'wizards' for MVC Classes

Hello

Is it possible to manually install the MVC Entity Framework Code First wizard in Visual Studio 2008?  Can you please tell me the steps I might take to troubleshoot where my issue might be?  When I attempt to add a controller (as the article in Scott Guthrie's blog tells you to in his 'code first' entry), I don't get the "Add Controller" wizard dialog where it asks me for a Scaffolding Template, Model Class, Data Context Class, and View.

Thank you


Night Skywatcher a/k/a David Diaz


Reply:

Hello Destrehandave,

Thanks for your post. However, I am sorry, but this is not the appropriate forum for your issue. You'd better open a new thread on the ASP.NET MVC forum here: http://forums.asp.net/1146.aspx/1?MVC

Thanks.


Vicky Song [MSFT]
MSDN Community Support | Feedback to us


------------------------------------

SharePoint 2010 Fast Search for SharePoint Crawl Remotely : The PowerShell Script

The PowerShell Script

$userName = "DOMAIN\serviceAccount"
$passWord = ConvertTo-SecureString "password" -Force -AsPlainText
$indexServerName = "serverName"

# Run the following commands on the remote computer
$credential = New-Object System.Management.Automation.PSCredential($userName, $passWord)
$session = New-PSSession $indexServerName -Authentication CredSSP -Credential $credential
Invoke-Command -Session $session -scriptBlock { `
Add-PSSnapin Microsoft.SharePoint.PowerShell; `
`
$indexServiceAppName = "Search Service Index Application"; `
`
$indexServiceApp = Get-SPServiceApplication -Name $indexServiceAppName; `
$contentSource = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $indexServiceApp; `
$contentSource.StartFullCrawl() `
}

How It Works

The above script uses PowerShell remoting to issue requests on a SharePoint indexing server.

The following variables need to be filled in:

  • $userName: The full username of an account with permissions to kick off a new search.
  • $passWord: The account's password. Note that dollar signs need to be escaped with tick characters in PowerShell strings (e.g. "Pa`$`$word").
  • $indexServerName: The name of a server running the index role.

An example usage is to run this script as part of a SQL job or SSIS step. The executable to call is "PowerShell.exe", with the above script saved in a ".ps1" file passed as the command's argument.

Because SharePoint 2010 and FAST Search for SharePoint use the same service application architecture, this approach works for either system.
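As a sketch of that setup (the script path below is hypothetical), the SQL Agent job step or SSIS Execute Process task would invoke something like:

```powershell
# Hypothetical path; save the script above as StartFullCrawl.ps1 first.
# -NoProfile and -ExecutionPolicy Bypass keep the scheduled run predictable.
PowerShell.exe -NoProfile -ExecutionPolicy Bypass -File "C:\Scripts\StartFullCrawl.ps1"
```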

  • Changed type GuYuming Wednesday, February 29, 2012 4:42 AM

Details of SharePoint 2010 Application & Server Monitoring, Part 3

Logging Database

Microsoft has always made it pretty clear how it feels about anyone touching the SharePoint databases. The answer is always a very clear and concise, "Stop it!" Microsoft didn't support reading from, writing to, or even looking crossly at SharePoint databases. Period. End of story. That became a problem, however, because not all of the information that administrators wanted about their farm or servers was discoverable in the interface, or with the SharePoint object model. This resulted in rogue administrators, with the curtains pulled, quietly querying their databases, hoping to never get caught.

SharePoint 2010 addresses this by introducing a logging database. This database is a farm-wide repository of SharePoint events from every machine in your farm. It aggregates information from many different locations, and writes them all to a single database. This database contains just about everything you could ever want to know about your farm, and that's not even the best part. The best part is that it is completely supported for you to read from and write to this database, if you would like, because the schema is public.

The following list includes some of the information that is logged by default:

  • Search Queries
  • Timer Jobs
  • Feature Usage
  • Content Import Usage
  • Server Farm Health Data
  • SQL blocked queries
  • Site Inventory
  • Search Query statistics
  • Page Requests
  • Site Inventory Usage
  • Rating Usage
  • Content Export Usage
  • NT Events
  • SQL high CPU/IO queries
  • Search Crawl
  • Query click-through

Microsoft had well-intentioned reasons for forbidding access to databases before. Obviously, writing to a SharePoint database potentially puts it in a state where SharePoint can no longer read it and render the content in it. Everyone agrees that this is bad.

What is less obvious, though, is that reading from a database can have the same impact. A seemingly innocent, but poorly written SQL query that only reads values could put a lock on a table or the whole database. This lock would also mean that SharePoint could not render out the content of that database for the duration of the lock. That's also a bad thing.

However, because this logging database is simply a copy of information gathered from other places, and it is not used to satisfy end-user requests, it's safe for you to read from it or write to it. If you destroy the database completely, you can just delete it and let SharePoint re-create it. The freedom is invigorating.

Let's take a look at some details behind this change of heart.

Configuring the Logging Database

How do you use this database and leverage all this information? By default, health data collection is enabled. This builds the logging database. To view the settings, open SharePoint Central Administration and go into the now-familiar Monitoring section. Under the Reporting heading, click Configure usage and health data collection to display the page shown in Figure 17.

Let's start by looking at the settings at the top. The first checkbox on the page determines whether the usage data is collected and stored in the logging database. This is turned on by default, and here is where you would disable it, should you choose to.

The next section enables you to determine which events you want reported in the log. By default, all eight events are logged. If you want to reduce the impact that logging has on your servers, you can disable events for which you don't think you'll want reports. You always have the option to enable events later. You may want to do this if you find yourself wanting to investigate a specific issue. You can turn the logging on during your investigation, and then shut it off after the investigation is finished.

The next section determines where the usage logs will be stored. By default, they are stored in the LOGS directory of the SharePoint root, along with the trace logs. The usage logs follow the same naming convention as the trace logs, but have the suffix .usage. As with the trace logs, it's a good idea to move these logs off of the C:\ drive if possible. You also have the capability to limit the amount of space occupied by the usage logs, with 5 GB being the default.

The next section, Health Data Collection, seems simple enough — just a checkbox and a link. The checkbox determines whether SharePoint will periodically collect health information about the members of the farm. The link takes you to a list of timer jobs that collect that information. When you click the Health Logging Schedule link, you're taken to a page that lists all of the timer jobs that collect this information. You can use this page to disable the timer jobs for any information you don't want to collect. Again, the more logging you do, the greater the impact on performance.

The amount of information SharePoint collects about itself is quite vast. Not only does it monitor SharePoint-related performance (such as the User Profile Service Application Synchronization Job), it also keeps track of the health of non-SharePoint processes (such as SQL Server). It reports SQL blocking queries and Dynamic Management Views (DMV) data. Not only can you disable the timer jobs for information that you don't want to collect, but you can also decrease how frequently they run, to reduce the impact on your servers.



Figure 17. Configuring the logging database

 

The next section of the Configure usage and health data collection page is the Log Collection Schedule. Here you can configure how frequently the logs are collected from the servers in the farm, and how frequently they are processed and written to the logging database. This lets you control the impact the log collection has on your servers. The default setting collects the logs every 30 minutes, but you can increase that to reduce the load placed on the servers.

The final section of the page displays the SQL instance and database name of the reporting database itself. The default settings use the same SQL instance as the default Content Database SQL instance, and use the database name WSS_Logging. The page says that it is recommended that you use the default settings. However, there are some pretty good reasons to change its location and settings.

Considering the amount of information that can be written to this database, and how frequently that data can be written, it might make sense to move this database to its own SQL Server instance. Though reading from and writing to the database won't directly impact end-user performance, the amount of usage this database could see might overwhelm SQL Server, or fill up the drives that also contain your other SharePoint databases. If your organization chooses to use the logging database, keep an eye on the disk space that it uses, and the amount of I/O activity it generates. On a test environment with about one month's worth of use by one user, the logging database grew to more than 1 GB. This database can get huge.

If you must alter those settings, you can do so in Windows PowerShell with the Set-SPUsageApplication cmdlet. The following PowerShell code demonstrates how to change the location of the logging database.

Set-SPUsageApplication -DatabaseServer <Database server name>
  -DatabaseName <Database name> [-DatabaseUsername <User name>]
  [-DatabasePassword <Password>] [-Verbose]

Specify the name of the SQL Server instance where you would like to host the logging database. You must also specify the database name, even if you want to use the default name, WSS_Logging. If the user running the Set-SPUsageApplication cmdlet is not the owner of the database, provide the username and password of an account that has sufficient permissions. Because this database consists of data aggregated from other locations, you can move it without losing any data. It will simply be repopulated as the collection jobs run.
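For example (the server instance name below is hypothetical), moving the logging database to a dedicated SQL instance might look like this, run from the SharePoint 2010 Management Shell:

```powershell
# Hypothetical SQL instance name; the database keeps the default WSS_Logging name.
# The collection timer jobs will repopulate the new database as they run.
Set-SPUsageApplication -DatabaseServer "SQLLOG01\SHAREPOINT" -DatabaseName "WSS_Logging" -Verbose
```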

To get the full list of PowerShell cmdlets that deal with the Usage service, use the following command.

Get-Command -Noun SPUsage*

Consuming the Logging Database

Thus far, you've read a lot about this logging database, what's in it, and how to configure it. But you haven't learned how you can enjoy its handiwork. There are many places to consume the information in the logging database.

The first place to look is Central Administration. Click Monitoring and then select Reporting; there are three reports that use information in the logging database. The first is a link that says View administrative reports. Clicking that link takes you to a document library in Central Administration that contains a few canned administrative reports. Out-of-the-box, there are only search reports, but any type of reports could be put here. Microsoft could provide these reports, or they can be created by SharePoint administrators.

The documents in this library are simply web pages, so you can click any of them to see the information reported in them. These particular reports are very handy for determining the source of search bottlenecks. This enables you to be proactive in scaling out your search infrastructure. You can see how long discrete parts of the search take, and then scale out your infrastructure before end users are affected.

The next reports in Central Administration are the health reports. These reports enable you to isolate the slowest pages in your web application, and the most active users per web application. Like the search reports, these reports enable you to be proactive and diagnose issues in your farm. Running these reports enables you to see details about the pages that take the longest time to render, and then take steps to improve their performance. Figure 18 shows part of the report. To view a report, click the Go button at the top of the page.



Figure 18. Slow Page report

 

The report shows how long each page takes to load, including minimums, maximums, and averages. This provides a very convenient way to find trouble pages. You can also see how many database queries the page makes. This is helpful, because database queries are expensive operations that can slow down a page render. You can drill down to a specific server or web application with this report as well, because the logging database aggregates information from all the servers in the farm.

You can also pick the scope of the report you want, and click the Go button. The reports are generated at run-time, so it might take a few seconds for them to appear. After the results appear, you can click a column heading to sort by those values.

Web Analytics reports in Central Administration are also fed from the logging database. These reports provide usage information about each of the farm's web applications, excluding Central Administration. Click the View Web Analytics reports link to view a summary page that lists the web applications in the farm, along with some high-level metrics like total number of page views and total number of daily unique visitors.

When you click a web application on the Summary page, you see a Summary page that provides more detailed usage information about that web application. This includes additional metrics for the web application, such as referrers, total number of page views, and the trends for each, as shown in Figure 19.



Figure 19. Web Analytics report

 

The web application Summary report also adds new links on the left. These links enable you to drill further down into each category. Each new report has a graph at the top, with more detailed information at the bottom of the page.

To change the scope of a report, click Analyze in the ribbon. This then shows the options that you have for the report, including the date ranges included. You can choose one of the date ranges provided, or choose custom dates. This provides the flexibility to drill down to the exact date you want. You can also export the report out to a comma-separated value (CSV) file by clicking the Export to Spreadsheet button. Because this is a CSV file, the graph is not included, only the dates and their values. These options are available for any of the reports after you choose a web application.

As previously mentioned, the Web Analytics reports do not include Central Administration. Although it is unlikely that you will need such a report, it is available to you. The Central Administration site is simply a highly specialized site collection in its own web application. Because it is a site collection, usage reports are also available for it. To view them, click Site Actions, and then select Site Settings. Under Site Actions, click Site Web Analytics reports. This brings up the same usage reports that you just saw at the web application level. You also have the same options from the ribbon, with the exception of being able to export to a CSV file.

Because these reports are site-collection Web Analytics reports, they are available in all site collections, and not in Central Administration. This is another way to consume the information in the logging database. To view the usage information for any site collection or web application, open Site Actions and select Site Settings to get the Web Analytics links. You have two similar links: Site Web Analytics reports and Site Collection Web Analytics reports. These are the same sets of reports, but at different scopes. The site-collection level reports are for the entire site collection. The site-level reports provide the same information, but at the site (also called web) level. You have the option to scope the reports at that particular site, or that site and its subsites.

Another option that was not available in the Central Administration Web Analytics reports is the capability to use workflows to schedule alerts or reports. You can use this functionality to have specific reports sent to people at specific intervals, or when specific values are met. This is another way that you can use the logging database and the information it collects to be proactive with a SharePoint farm.

There is one final way to consume the information stored in the logging database, directly from SQL. Although it might feel like you're doing something wrong, you're not. Microsoft said that it is okay. You have several ways to access data in SQL Server databases, but let's take a look at how to do it in SQL Server Management Studio with regular SQL queries.

SQL Server Management Studio enables you to run queries against databases. Normally, it is a very bad thing to touch any of the SharePoint databases, but the logging database is the only exception to that rule. To run queries against the logging database, you open Management Studio and locate the WSS_Logging database.

The database has a large number of tables. Each category of information has 32 tables to partition the data. It is obvious this database was designed to accommodate a lot of growth. Because of the database partitions, it is tough to do SELECT statements against them. Fortunately, the database also includes views that you can use to view the data.

Expand the Views node of the database to see which views are defined for you. In Figure 20, you can see how to get the information from the Usage tables. Right-click the view and click Select Top 1000 Rows.

This figure shows both the query that is used, and the results of that query. You can use this view and the resulting query as a template for any queries you want to design. If you do happen to damage the logging database, you can simply delete it, and SharePoint will re-create it.
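As a sketch of such a query (the server name below is hypothetical, and it assumes the RequestUsage view shown in the figure), the same top-1000 select can also be run from PowerShell with Invoke-Sqlcmd, which is available when the SQL Server PowerShell tools are installed:

```powershell
# Read-only query against the logging database (hypothetical server name).
# Requires the SQL Server PowerShell tools (the sqlps snap-in or SqlServer module).
Invoke-Sqlcmd -ServerInstance "SQLSERVER01" -Database "WSS_Logging" -Query @"
SELECT TOP 1000 *
FROM dbo.RequestUsage
"@
```

Because this only reads from a view, it carries the same (supported) risk profile as the Management Studio query above.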



Figure 20. Usage request query from logging database

 

Health Analyzer

By now, you've seen that you have a lot of ways to keep an eye on SharePoint. What if there was some way for SharePoint to watch over itself? What if it could use all that fancy monitoring to see when something bad was going to happen to it, and just fix it itself?

Welcome to the future. SharePoint 2010 introduces a feature called the Health Analyzer that does just that. The Health Analyzer utilizes timer jobs to run rules periodically, and to check on system metrics that are based on SharePoint best practices. When a rule fails, SharePoint can alert an administrator in Central Administration, or, in some cases, just fix the problem itself. To access all this in Central Administration, you click Monitoring and then select Health Analyzer.

Reviewing Problems

How do you know when the Health Analyzer has detected a problem? When you open up Central Administration and there's a red or yellow bar running across the top, as shown in Figure 21, that's the Health Analyzer alerting you that there's a problem in the farm. To review the problem, click View these issues on the right side of the notification bar.



Figure 21. Health Analyzer warning

When you click the link, SharePoint 2010 displays the Review problems and solutions page. (If there are no problems, you can also click Monitoring and then select Review problems and solutions in Central Administration to access the page.) This page shows you all the problems that the Health Analyzer found in the farm. Figure 22 shows some problems common with a single-server farm after installation.



Figure 22. Problems with a SharePoint farm

Clicking any of the issues displays the definition of the violated rule and possible remedies for it. Figure 23 shows details about one of the problems.



Figure 23. Problem details

As you can see toward the top of Figure 23, SharePoint provides a summary of the rule. This particular error indicates that one of the application pool accounts is also a local administrator. In most situations, this is a security issue, so SharePoint discourages it. SharePoint categorizes this as having a severity level of 2, being a Warning. It also tells you that this problem is in the Security category.

The next section, Explanation, describes what the problem is and to which application pools and services it pertains. The following section, Remedy, points you to the Central Administration page where you can fix the problem, and provides an external link to a page with more information about this rule. This is a great addition, and gives SharePoint the capability to update the information dynamically.

The next two sections indicate which server is affected by the issue, and which service logged the failure. The final section provides a link to view the settings for this rule. You learn more about the rule definitions later in this chapter.

That's a rather in-depth property page, and it's packed with even more features. Across the top is a small ribbon that gives you some management options.

Starting on the left, the first button is Edit Item. This lets you alter the values shown on the property page. You could use this to change the error level or category of the rule. It isn't recommended that you alter these values, but if you do, you can keep track of the versions with the next button to the right, Version History. The next button, Alert Me, enables you to set an alert if the item changes. You have these options because these rules are simply items in a list, so you have many of the same options you have with regular list items.

There is another button that deserves mention. For each rule, you have the option to Reanalyze Now. This lets you fire off any rule without waiting for its scheduled appearance, which is great for ensuring that a problem is fixed once you have addressed it. You won't have to wait for the next time the rule runs to verify that it has been taken care of.

Some problems are not only reported, but can be fixed in the property page as well. Figure 22 shows another problem that appears under the Configuration category. It notes that one or more of the trace log categories were configured with Verbose trace logging. This configuration issue can contribute to unnecessary disk I/O and drive space usage. The Health Analyzer alerts you when this value is set. This problem is fairly easy to fix. Simply set the trace logging level back to its default. For problems like this, SharePoint offers another option, Repair Automatically, shown at the top of Figure 24.



Figure 24. Repair Automatically button

Click the Repair Automatically button if you want SharePoint 2010 to fix the problem. Then, click the Reanalyze Now button, click Close on the property page, and then reload the problem report page. The trace logging problem should no longer be listed. This is almost bliss for the lazy SharePoint administrator.

Rule Definitions

The real power of the Health Analyzer lies in its impressive set of rules. SharePoint 2010 includes 60 rules. To see the entire list and details about each rule, click Monitoring, select Health Analyzer, and then choose Review rule definitions under Health Analyzer.

The rules are broken down by category: Security, Performance, Configuration, and Availability. The default view shows several pieces of information about each rule, including the Title, the Schedule of how often it runs, whether it's Enabled to run, and whether it will Repair Automatically. Wait, did you just read "Repair Automatically"? You read that right. Some rules can be configured to repair automatically the problems they find.

One example of a rule that automatically fixes itself is Databases used by SharePoint have fragmented indices. Once a day, SharePoint checks the indices of its databases, and if their fragmentation exceeds a hard-coded threshold, SharePoint automatically defrags the indices. If the indices are not heavily fragmented, it does nothing. This is a great use of Repair Automatically. It's an easy task to automate, and there's no reason it should need to be done manually by an administrator.

Some rules, like Drives are running out of free space, don't seem like quite as good a candidate for SharePoint to fix by itself. You don't want it deleting all those copies of your resume or your Grandma's secret chocolate-chip cookie recipe.

If you want to change the settings of any of the rules (including whether or not it repairs automatically), you simply click the rule title, or click the rule's line and select Edit Item in the ribbon. Here, you can enable or disable a rule. In a single-server environment, it might make sense to disable the rule that reports databases on the SharePoint server; that is nothing that can be fixed, so getting alerts about it does you no good. You could also change how often a rule runs, but it's best to leave a rule's details alone, other than enabling or disabling it and choosing whether it should repair problems automatically.

Finally, the rules are simply items in a list. This illustrates how the rules list is extensible. More rules can be added later by Microsoft, or by third parties.
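Because the rules are list items backed by cmdlets, you can also work with them from Windows PowerShell. The following is a sketch only; the wildcard filter is an example, and you should confirm the property names with Get-Help Get-SPHealthAnalysisRule before relying on them.

```powershell
# List Health Analyzer rules and their current state
Get-SPHealthAnalysisRule | Select-Object Name, Enabled, Summary

# Disable every rule whose name mentions databases (example filter)
Get-SPHealthAnalysisRule | Where-Object { $_.Name -like "*Database*" } |
    Disable-SPHealthAnalysisRule
```

There are matching Enable-SPHealthAnalysisRule and Disable-SPHealthAnalysisRule cmdlets, so turning a noisy rule back on later is just as easy.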

Timer Jobs

Timer jobs are one of the great unsung heroes of SharePoint. They have been around for several versions of SharePoint, and they get better with age.

Timer jobs are the workhorses of SharePoint. At the most basic level, timer jobs are tasks defined in XML files in the configuration database. Those XML files are pushed out to the members of the farm, and are executed by the Windows service, SharePoint 2010 Timer. Most configuration changes are pushed out to the farm members with timer jobs. Recurring tasks like Incoming E-Mail also leverage timer jobs.

In SharePoint 2010, timer jobs get another round of improvements. A lot of the functionality covered in this chapter relies on timer jobs, so you have seen some of those improvements already. This section drills down a little deeper into how timer jobs have improved.

Timer Job Management

When you enter Central Administration, it is not immediately obvious that timer jobs have received such a shiny new coat of paint. You have links to essentially the same two items in SharePoint 2010 that you did in SharePoint 2007: job status and job definitions. In SharePoint 2010, the timer job links are under the Monitoring section, because there is no longer an Operations tab. Figure 25 shows their new home.



Figure 25. Timer Job Definitions

The timer job definition page is largely unchanged from its SharePoint 2007 counterpart. You get a list of the timer jobs, the web application they will run on, and their schedule. You can also change the jobs that are shown by filtering the list with the View drop-down in the upper right-hand corner.

To really see what's new, click one of the timer job definitions. Hopefully you're sitting down, because otherwise the new timer definition page shown in Figure 26 might knock you over. It includes all of the same information provided in SharePoint 2007, including the general information on the job definitions screen, and the buttons to disable the timer job. However, there are two new, very exciting features.

First, you can change the timer job schedule in this window. In SharePoint 2007, you needed to use code to do this. This provides a lot of flexibility to move timer jobs around if your farm load requires it. That's a great feature, but it's not the best addition.

The best addition to this page (and arguably to timer jobs in SharePoint 2010) is the button on the bottom of the page, Run Now. You now have the capability to run almost any timer job at will. This means no more waiting for the timer job's scheduled interval to elapse before knowing if something you fixed is working. It is also how Health Monitoring (discussed earlier in this chapter) can fix issues and re-analyze problems. You are no longer bound by the chains of timer job schedules. You are free to run timer jobs whenever you want. That alone is worth the cost of admission.



Figure 26. Edit Timer Job page

Timer Job Status

The other link related to timer jobs that you have in Central Administration is Check job status. This serves the same purpose as its SharePoint 2007 counterpart. However, like the timer job definitions, it has received a new coat of paint. Figure 27 shows the new Timer Job Status page. Like the SharePoint 2007 version, it shows you the timer jobs that have completed, when they ran, and whether they were successful.

SharePoint 2010 takes it a step further. The Succeeded status is now a hyperlink. If a timer job fails or succeeds, you can click this link to the status and get more information. You also have the capability to filter and view only the failed jobs. That helps with troubleshooting, because you can see all the failures on one page, without all those pesky successes getting in the way. To take it a step further, you can click on a failure and get information about why that particular timer job failed.

The Timer Job Status page serves as a dashboard. You've already seen how it shows the timer job history, but it also shows the timer jobs that are scheduled to run, as well as the timer jobs that are currently running. If you want more complete information on any of these sections, you can click the appropriate link on the left under Timer Links. This provides a page dedicated to each section.

Figure 27. Timer Job Status page

Along with showing the timer jobs that are running, you can also see the progress of how far along each job is, complete with a progress bar. If you have many jobs running at once, you can click Running Jobs in the left Navigation pane to access a page dedicated to reporting the timer jobs that are currently running.

Here's one final timer job improvement: SharePoint 2010 introduces the capability to assign a preferred server for the timer jobs running against a specific Content Database. Figure 28 shows how it is configured in Central Administration.



Figure 28. Configuring Preferred Timer Job Server

This setting is set per Content Database, so it is set on the Manage Content Database Settings page (that is, in Central Administration, click Application Management and then select Manage Content Databases). Being able to set a particular server to run the database's timer jobs serves two purposes.

From a troubleshooting standpoint, you can use this to isolate failures to a single box, if you're having trouble with a specific timer job or Content Database. You can also use this to move the burden of timer jobs to a specific server. This server could be one that is not used to service end-user requests, so having it be responsible for timer jobs will allow another scaling option.

Although you can do a lot to manage timer jobs in Central Administration, you can't forget about Windows PowerShell. SharePoint includes five cmdlets that deal with timer jobs. To discover them, use the following Get-Command cmdlet.

PS C:\> Get-Command -noun SPTimerJob

You can use PowerShell to list all of your timer jobs using Get-SPTimerJob, and then choose to run one with Start-SPTimerJob.
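As a sketch, the following lists the most recently run jobs and fires one off immediately. The job name shown is an example; use Get-SPTimerJob on your own farm to find the exact name of the job you care about.

```powershell
# List the ten most recently run timer jobs
Get-SPTimerJob | Sort-Object LastRunTime -Descending |
    Select-Object -First 10 Name, LastRunTime

# Run a specific job right now (the job name here is an example)
Get-SPTimerJob "job-application-server-admin-service" | Start-SPTimerJob
```

This is the scripted equivalent of the Run Now button covered earlier, which makes it easy to build troubleshooting scripts that kick a job and then check the logs.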

Summary

This chapter picked up where your installation experience left off. You have a SharePoint farm that is installed and running perfectly. With the tools in this chapter, you have learned how to keep an eye on that farm to ensure that it continues to run well. If there is trouble in your farm, you are now armed with the tools to hunt it down and fix it. You will know to check the Health Analyzer and see if it has found any problems with your farm. If there's nothing there, you will also know how to use the ULS logs to track down that error. After finishing this chapter, you are a lean, mean, SharePoint monitoring machine.


Complete Details of Monitoring SharePoint 2010 Application Part -2

 

Figure 9. Correlation ID in action

 In this example, you know that the user was trying to view a Word document with the Office Web Applications and it failed. Once you get the correlation ID, b7162a24-1fa2-4567-80a5-74feda9a768b, 20100801171552, you can figure out why the document would not open.

Because each entry in the trace logs has a correlation ID, you can just open up the trace log in Notepad and look for the lines that reference this conversation. Figure 10 shows what you would find in this example.

Figure 10. Determine the problem by using the correlation ID

 

By following the correlation ID through the trace log, you might stumble across a pretty telling error: "There are no instances of the Word Viewing Service started on any server in this farm. Ensure that at least one instance is started on an application server in the farm using the Services on Server page in Central Administration."

That does sound like a problem. Sure enough, by checking in Central Administration, you see that no servers in the farm are running the Word Viewing service instance. By following that correlation ID through the logs, you can learn all kinds of fun stuff about how SharePoint works. For example, SharePoint looks to see if there is a cached copy of that document before it tries to render it.

As you have seen, the correlation ID is exposed when an error page is displayed, and throughout the trace logs. It is also referenced in events in the Windows Event Log, when it is appropriate. You can also use a correlation ID when doing a SQL trace on SharePoint's SQL traffic. The correlation ID is considered by many administrators to be one of the best new features in SharePoint 2010.

The Developer Dashboard

You aren't always handed the correlation ID all tied up with a bow as you were in Figure 9. Sometimes the page renders, but there are problems with individual web parts. In those cases, you can use the Developer Dashboard to get the correlation ID and track the problem down.

Despite what the name suggests, this dashboard is not just for developers. The Developer Dashboard is a dashboard that shows how long it took for a page to load, and which components loaded with it, as shown in Figure 11.

Figure 11. Developer Dashboard  

This dashboard is loaded at the bottom of your requested web page. As you can see, the dashboard is chock full of information about the page load. You can see how long the page took to load (790.11 ms), as well as who requested it, its correlation ID, and so on. This comes in handy when the help desk gets those ever-popular "SharePoint is slow" calls from users. Now you can quantify exactly what "slow" means, as well as see what led up to the page load being slow.

If web parts were poorly designed and did a lot of database queries, you'd see it here. If they fetched large amounts of SharePoint content, you'd see it here. If you're really curious, you can click the link on the bottom left, Show or hide additional tracing information, to get several pages worth of information about every step that was taken to render that page.

Now that you're sold on the Developer Dashboard, how do you actually use it? As previously mentioned, it is exposed as a dashboard at the bottom of the page when it renders. The user browsing the page must have the AddAndCustomizePages permission (site collection admins and users in the Owner group, by default) to see the Developer Dashboard, and it must be enabled in your farm.

By default, it is shut off (Off), which is one of the three possible states. It can also be on (On), which means the dashboard is displayed on every page load. Not only is that tedious when you're using SharePoint, but it also carries a performance penalty. The third state, OnDemand, gives you the best of both worlds: the dashboard stays hidden until a user who is permitted to see it displays it by clicking its icon at the top of the page.

Figure 12. Enabling the Developer Dashboard when it's on demand

How do you go about enabling the Developer Dashboard to make this possible? You can use Windows PowerShell, as shown in the following example.

$dash = [Microsoft.SharePoint.Administration.SPWebService]::ContentService.DeveloperDashboardSettings
$dash.DisplayLevel = 'OnDemand'
# $dash.DisplayLevel = 'Off'
# $dash.DisplayLevel = 'On'
$dash.TraceEnabled = $true
$dash.Update()

The DisplayLevel can be one of three values: Off, On, or OnDemand. The default value is Off.

Notice that at no point do you specify a URL when you're setting this; it is a farm-wide setting. Never fear, though: only users with the AddAndCustomizePages permission will see it, so hopefully it won't scare too many users if you must enable it for troubleshooting. Keep in mind that if you have enabled My Sites, each user is the site collection owner of his or her own site collection, so those users will see it there.
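To confirm which state the dashboard is currently in, you can read the same settings object back. A quick sketch:

```powershell
# Read the current Developer Dashboard state (Off, On, or OnDemand)
$settings = [Microsoft.SharePoint.Administration.SPWebService]::ContentService.DeveloperDashboardSettings
$settings.DisplayLevel
```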

Methods for Consuming the Trace Logs

So far, you've learned what the trace logs are, and a little bit about how to configure them and their contents. In this section, you learn about some ways to mine them and get out the information that you need.

Using Excel 2010

Not only are those beloved trace logs text files, they are tab-delimited text files. This means that Excel 2010 can import them easily and put each column of information into its own column. Once trace logs are in an Excel 2010 spreadsheet, you can use Excel's sort and filtering to locate the events of interest. You can also resize the columns or hide them completely for readability. You can even paste several log files into one spreadsheet to look for trends of errors.

Whereas Notepad gets sluggish with large files, Excel handles them with ease.
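If you'd rather stay at the command line, PowerShell can parse the same tab-delimited files. This is a sketch only: the path is an example, and it assumes the default ULS column layout, where both headers and values are padded with spaces.

```powershell
# Parse a trace log as tab-separated text (the path is an example).
# ULS pads headers and values with spaces, so trim before comparing.
$path = "C:\Logs\SERVER-20220301-0930.log"
$header = (Get-Content $path -TotalCount 1) -split "`t" | ForEach-Object { $_.Trim() }

Import-Csv -Path $path -Delimiter "`t" -Header $header |
    Where-Object { $_.Level -and $_.Level.Trim() -eq "High" } |
    Select-Object -First 20 Timestamp, Category, Message
```

Like the Excel approach, this gives you per-column filtering, but it is scriptable, so you can run the same filter against many log files at once.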

Using MSDN ULS Viewer

Though it's frustrating that SharePoint does not come with a log viewer, Microsoft has redeemed itself a bit. It released a free, dedicated (though unsupported) ULS Viewer. Because this utility was built from the ground up to read SharePoint's ULS, it does it quite well.

It allows real-time monitoring of the ULS logs, and will do smart highlighting (where it highlights all events that have the same value of the field you are hovering over). For example, if you hover over the category Taxonomy, it will automatically highlight all categories that match.

It also offers extensive filtering that includes filtering out event levels like Verbose or Medium. You can also filter by any value in any column. Right-clicking any correlation ID allows you to set a highlight for any matching row, or simply only show the rows that match. Figure 13 shows how to filter the logs based on a single correlation ID.

Figure 13. MSDN ULS Viewer

The interface has a lot of options, and is laid out very well. Because it's a free tool, it's worth every penny. If you're not comfortable installing it on your production servers, you can install it on your workstation and copy the ULS files over when trouble occurs.

Using SPLogEvent and SPLogFile

The last method to discuss is using the PowerShell cmdlets that deal with consuming the trace logs. The first, Get-SPLogEvent, retrieves log events from the trace logs. As with the other cmdlets you have learned about, using Get-Help with the -Examples parameter provides a good foundation to learn the different ways you can use this cmdlet. Let's take a look at a few examples.

If you just run Get-SPLogEvent with no parameters, it will spit back every record from every trace log it can find on your local machine. Hopefully, you are sitting in a comfortable chair if you do that, because it's going to take a while. Fortunately, you have many ways to limit the number of results you get, making it easier for you to separate the wheat from the chaff.

First, you can use the PowerShell cmdlet Select to pare down the results. The following examples demonstrate getting the first and last events in the logs.

Get-SPLogEvent | Select -First 5

Get-SPLogEvent | Select -Last 5

Get-SPLogEvent | Select -First 20 -Last 10

Depending on how many trace logs you have, it could take a while for the last results to show up. It's still walking through the whole list of events; it's just not displaying them all.

A better way is to use Get-SPLogEvent's -StartTime parameter to limit the events it reads. The following command returns the last ten events in the last five minutes.

Get-SPLogEvent -StartTime (get-date).addminutes(-5) | Select -Last 10

This will return results much more quickly, and likely will give you better results. You can also specify an end time, if you want to narrow down your search. You can also specify which levels to return. The following line returns all of the high-level events in the last minute.

Get-SPLogEvent -MinimumLevel "high" -StartTime (get-date).addminutes(-1)

In most cases, when you use Get-SPLogEvent, it is to get all of the events for a particular correlation ID. This is as easy as piping Get-SPLogEvent through a Where-Object clause and filtering for a specific correlation ID. The following command returns all of the events in the last ten minutes with a blank correlation ID.

Get-SPLogEvent -StartTime (Get-Date).addminutes(-10) |
    Where-Object {$_.Correlation -eq "00000000-0000-0000-0000-000000000000"}

If you want a real correlation ID to work with, you can get one quickly with the following command.

Get-SPLogEvent -StartTime (Get-Date).addminutes(-1) | select correlation -First 1

You might have to run it a couple times to get a correlation ID that is not all zeros.

Figure 14 shows how this looks with its output. Once you have it, you can paste it into the previous statement to get all of the events that pertain to that correlation ID.

Figure 14. Getting a random correlation ID  
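Putting the last two ideas together, here is a sketch that grabs a recent non-zero correlation ID and then pulls every event in that conversation, with no copying and pasting required:

```powershell
# Find a recent non-zero correlation ID in the trace logs
$corr = Get-SPLogEvent -StartTime (Get-Date).AddMinutes(-5) |
    Where-Object { $_.Correlation -ne "00000000-0000-0000-0000-000000000000" } |
    Select-Object -First 1 -ExpandProperty Correlation

# Retrieve every event that shares that correlation ID
Get-SPLogEvent -StartTime (Get-Date).AddMinutes(-10) |
    Where-Object { $_.Correlation -eq $corr }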

You have some other cmdlets at your disposal for pruning through those trace logs. A good one to use when troubleshooting is New-SPLogFile.

This tells SharePoint to close out the current log file and create a new one. You saw earlier that, by default, SharePoint rolls its logs over every 30 minutes. If you've ever loaded up those logs and tried to look for a specific event, you know it can be quite daunting.

With New-SPLogFile, you can run it before and after an event you are troubleshooting. For example, if the User Profile service instance won't start on a particular server, you could use New-SPLogFile to create a new log file right before reproducing the problem. Then, after you've tried to start the service, you can create another new log file. This will isolate into one file all the events created during your attempt, making it easier to follow.

If you have multiple servers in your farm, browsing through trace logs can be daunting, because you must constantly collect them from all of your servers. If only there was a way to merge the logs from all of your servers into one file…

Well, there is! SharePoint 2010 comes with a cmdlet, Merge-SPLogFile, that does exactly that. Merge-SPLogFile merges the trace logs from all of the SharePoint servers in your farm into one, easy-to-consume (or, at least, easier-to-consume) log file. All the tools that you used previously to work with trace files work with the merged log file as well.

By default, Merge-SPLogFile only merges events from the last hour from each machine. Using the same -StartTime and -EndTime parameters that you can use with Get-SPLogEvent, you can customize that window. If the error you are chasing happened in the last ten minutes, you can make it shorter. If you want to archive all the events from your servers from the last three hours, you can make it longer. Figure 15 shows Merge-SPLogFile in action.

Figure 15. Merging log files  

You can see from Figure 15 that all Merge-SPLogFile needs to run is a path to write the newly created log file to. When you run it, it creates a timer job on all of the machines in the farm. This timer job collects the logs requested, and then copies them over to the server where Merge-SPLogFile is running. That's why you are warned that it may take a long time.

Although Merge-SPLogFile is happy to run with no parameters, you do have the option of trimming down the results, should you choose to. Get-Help Merge-SPLogFile provides a list of parameters that you can use, including (but not limited to) -Area, -EventID, -Level, -Message, and -Correlation. Figure 16 shows how you can use the last one, the correlation ID, to get a single log file that tracks one correlation ID from across your farm. This can be very handy when chasing down a problem.



Figure 16. Merging events with a common correlation ID

Because Get-SPLogEvent supports a -Directory switch, you can point it at a location other than your standard LOGS directory when searching for events. This can speed up your searches if you copy the applicable logs to a different directory and point Get-SPLogEvent there. You can also point it at the directory where you saved a merged log file created by Merge-SPLogFile, and use it to filter those results as well.

As previously mentioned, Merge-SPLogFile is good for troubleshooting, but it is also very handy for archiving log events.
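As a sketch, the following merges the last 30 minutes of logs from every server in the farm, filtered to a single correlation ID. The output path and the correlation ID are examples; substitute your own.

```powershell
# Merge trace logs from all farm servers into one file,
# keeping only one correlation ID (path and ID are examples)
Merge-SPLogFile -Path "D:\Logs\merged.log" `
    -StartTime (Get-Date).AddMinutes(-30) `
    -Correlation "b7162a24-1fa2-4567-80a5-74feda9a768b"
```

Remember that this kicks off a timer job on every server, so give it a few minutes to finish on a large farm.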

Windows Event Logs

In addition to the trace logs, another part of the ULS is Windows Events. These are the events that you are used to viewing in the Windows Event Viewer. While SharePoint 2010 writes to its own trace logs, it writes events here as well.

You can configure how much information SharePoint writes to the Windows Event Logs in the same way that you can control how much it writes to the trace logs. In Central Administration, click Monitoring and then select Configure diagnostic logging to set the threshold of events that are written to the Windows Event Logs, just like you can with the trace log.

You have several levels of events to choose from, including Reset to default, which resets the logging level back to its default. For Windows Events, you have an additional setting, event log throttling. If you enable event log throttling, SharePoint does not repeatedly write the same event to the Windows logs when there is a problem. Instead, it only writes an event periodically, telling you that the original event is still being throttled. This keeps your Windows Event Logs from being overrun by the same message.

In Central Administration, you can only enable or disable this feature. In Windows PowerShell, using Set-SPDiagnosticConfig, you can enable or disable throttling, as well as change some of the settings. Table 2 shows a list of these settings, a description of what they do, and their default values.

Table 2. Settings for Set-SPDiagnosticConfig

  • Threshold: Number of events allowed in a given time period (TriggerPeriod) before flood protection is enabled for this event. Integer between 0 (disabled) and 100 (maximum). Default: 5.
  • TriggerPeriod: The timeframe in which the threshold must be exceeded in order to trigger flood protection. In minutes. Default: 2.
  • QuietPeriod: The amount of time that must pass without an event before flood protection is disabled for an event. In minutes. Default: 2.
  • NotifyPeriod: The interval at which SharePoint writes an event notifying you that flood protection is still enabled for a particular event. In minutes. Default: 5.

Earlier in this chapter, Figure 6 showed the Event Log Flood Protection settings as they are displayed with Get-SPDiagnosticConfig. These settings can be changed with the complementary cmdlet, Set-SPDiagnosticConfig.
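A sketch of changing those flood protection settings follows. The parameter names follow the EventLogFloodProtection* convention, and the values shown are examples; confirm the exact names on your build with Get-Help Set-SPDiagnosticConfig before running this.

```powershell
# Example values only; confirm parameter names with
# Get-Help Set-SPDiagnosticConfig on your farm
Set-SPDiagnosticConfig -EventLogFloodProtectionEnabled $true `
    -EventLogFloodProtectionThreshold 10 `
    -EventLogFloodProtectionTriggerPeriod 5 `
    -EventLogFloodProtectionQuietPeriod 5
```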


Complete Details of Monitoring SharePoint 2010 Application part -1

 

Introduction

Getting Microsoft SharePoint 2010 up and running is only half the battle. Keeping it up and running is another thing entirely. Once you have SharePoint 2010 installed and configured, and you have end users telling you how great it is (and you are), it's easy to get comfortable and just admire your handiwork. Don't be lulled into a false sense of security. Forces are working against you and your poor, innocent SharePoint farm.

It's your job to keep these problems at bay and keep SharePoint spinning like a top. Using the tools that you read about in this chapter, you can see what SharePoint is doing behind the scenes, and see ways to predict trouble before it happens. After you're finished with this chapter, you'll almost look forward to experiencing problems with SharePoint so that you can put these tools to good use and get to the bottom of the issues.

Unified Logging Service (ULS)

The Unified Logging Service (ULS) is the service that is responsible for keeping an eye on SharePoint and reporting what it finds. It can report events to three different locations:

  • SharePoint trace logs
  • Windows Event Log
  • SharePoint logging database

Where the event is logged (and if it's logged at all) depends on the type of event, as well as how SharePoint is configured. The ULS is a passive service, which means that it only watches SharePoint and reports on it; it never acts on what it sees.

Let's take a look at each of the three locations and see how they differ.

Trace Logs

The trace logs are the logs you think of first when discussing SharePoint logs. They are plain old text files that are tab delimited and open with any tool that can open text files. You learn about methods for consuming them later in this chapter.

By default, trace logs are located in the LOGS directory of the SharePoint root (also called the 14 Hive) at C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14.

Figure 1 shows how the directory looks. Later in this chapter, you learn how to move these files to a better location.



Figure 1. SharePoint trace logs

 

The naming format for the trace log files is machinename-YYYYMMDD-HHMM.log, in 24-hour time. By default, a new log file is created every 30 minutes. You can change the interval by using Windows PowerShell with the Set-SPDiagnosticConfig cmdlet. The following code snippet configures SharePoint to create a new trace log every 60 minutes.

Set-SPDiagnosticConfig -LogCutInterval 60

For more information about using PowerShell with SharePoint 2010, read Chapter 5.
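Set-SPDiagnosticConfig also controls where the trace logs are written, which is how you can move them off the system drive. A quick sketch (the path is an example, and the directory must exist on every server in the farm):

```powershell
# Move the trace logs to a dedicated drive (example path;
# the folder must exist on every server in the farm)
Set-SPDiagnosticConfig -LogLocation "D:\SharePoint\Logs"
```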

Trace logs existed in previous versions of SharePoint, but they have undergone some improvements in SharePoint 2010. For starters, they take up less space, but still provide better information. It's a classic "eat all you want and still lose weight" situation.

The trace logs are smaller than their SharePoint 2007 counterparts for a couple of reasons. First, a lot of thought has gone into what gets written to the trace logs by default. Anyone who has perused through a SharePoint 2007 ULS log or two has seen a lot of repetitive and mostly unhelpful messages. These messages add to the bloat of the file, but do not provide much in return. In SharePoint 2010, many of these messages have been removed from the default settings, which makes the log files smaller.

Also, SharePoint now leverages Windows NT File System (NTFS) file compression for the LOGS folder, which also decreases the amount of space the logs occupy on disk.

Figure 2 shows the compression on a trace log file.

Figure 2. Trace log compression

 

The log file shown in Figure 2 is 9.62 MB, but it is only taking 3.30 MB on disk, thanks to NTFS compression. This allows you to keep more logs, or logs with more information, without as much impact on the drive space of your SharePoint servers.

Finally, in SharePoint 2010 you have much better control over which events are written to the trace logs, and better control over getting things put back after you have customized them. In SharePoint 2007, you had some control over which events were written to the trace logs, but there were two significant drawbacks:

  • There was no way to see in the interface which events you had already changed.
  • Once you cranked up one area, there was no way to set it back to its original setting after you had successfully solved a problem.

With SharePoint 2010, there's now good news, because both of those issues have been addressed. You have a very robust event throttling section in Central Administration that enables you to customize your logs to whatever area your issue is in, and then dial it back easily once the problem is solved.

In Central Administration, click Monitoring on the left, and then select Configure Diagnostic Logging under the Reporting section to see a window similar to Figure 3.



Figure 3. Event Throttling

 

Two things should jump out at you. The first is the sheer number of options you have to choose from. The Enterprise SKU of SharePoint Server has 20 different categories, each with subcategories. This means that if you are troubleshooting an error that only has to do with accessing External Data with Excel Services, you can crank up the reporting only on that without adding a lot of other unhelpful events to the logs. The checkbox interface also means that you can change the settings of multiple categories or subcategories at one time. So, the interface makes it easy to change a single area, or a large area.

The second thing that should jump out at you in Figure 3 is that one of those options, Secure Store Service, is bolded. In SharePoint 2007, after you had customized an event's logging level, there was no way to go back to see which levels you had changed. And, if, by some strange twist of fate, you were able to remember which events you had changed, there was no way to know what level to change them back to.

In most cases, one of two things happened. You either left the events alone (in their chatty state), or you found another SharePoint 2007 installation and went through the settings, one by one, to compare them. Neither solution was great.

Fortunately, SharePoint 2010 addresses both of those issues swimmingly. As you can see in Figure 3, the Secure Store Service is bolded in the list of categories. That's SharePoint 2010's way of saying, "Hey! Look over here!" Any category that is not at its default logging level will appear in bold, making it very easy to discover which ones you need to change back. That's only half the battle though.

How do you know what to set it back to? SharePoint 2010 covers that, too. As shown in Figure 4, in the Least critical events to report to the trace log drop-down list, there is a shining star at the top: Reset to default. This little number sets whichever categories are selected back to their default logging settings. This means that you can crank up the logging as much as you want, knowing that you can easily put it back once you are finished. Microsoft has even provided an All Categories checkbox at the top of the category list (see Figure 3) to make it even easier to fix in one fell swoop.



Figure 4. Reset to default setting

 

Setting the event levels that are logged to the trace logs is just one of the settings that you can customize. Probably the most important change you can make to your trace logs is their location. As previously mentioned, the default location for these files is C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\LOGS. That location is fine, because the logs get to hang out with the rest of their SharePoint 2010 friends. But space on the C:\ drive is valuable, and if you can safely move something to another drive, you should.

Fortunately, it is very easy to move these files to a new location, where they cannot do any harm to your C:\ drive. To do this, open Central Administration, click Monitoring, and then select Configure Diagnostic Logging. Figure 5 shows the bottom of this page, where you can enter a new location to store your trace logs.

Figure 5. Moving trace logs

 

Change the default location to a location on another drive. An excellent choice is something like D:\Logs\SharePoint or E:\Logs\SharePoint, depending on where your server's hard drives are located.

Keep in mind that this is a farm-level setting, not server level, so every server in your farm must have that location available. If you try to set a location that is not available on all of the servers in the farm, you'll get an error message. You'll also need to keep this in mind when adding new servers to your farm. Your new server must have this location as well.

Figure 5 shows a couple of other settings you can use to control the footprint your trace logs have on your drives.

The first option allows you to configure the maximum number of days the trace logs are kept. The default is 14 days. This is a good middle-of-the-road setting. Resist any temptation to shorten this time period unless your servers are really starving for drive space. If you ever need to troubleshoot a problem, the more historical data you have, the better off you are. The only downside to keeping lots of information is the amount of time and effort it takes to go through it. You learn more about this later in this chapter.

You can also assign a finite size to the amount of drive space your trace logs can consume, whether they have reached 14-day expiration or not. The default value is 1 TB, so be sure to change that value if you want to restrict the size in a more meaningful way.
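Conceptually, the retention period and the size cap combine like the following sketch (Python, purely my own illustration of the pruning rules, not SharePoint's actual implementation): a file is eligible for deletion once it is older than the retention window, and the remaining set is trimmed, oldest first, if it exceeds the disk-space cap.

```python
from datetime import datetime, timedelta

def logs_to_prune(logs, now, max_days=14, max_bytes=1 * 1024**4):
    """Given (name, created, size_bytes) tuples, return the names that
    fall outside the retention window or the disk-space cap.
    Defaults mirror SharePoint's 14-day / 1 TB settings."""
    cutoff = now - timedelta(days=max_days)
    prune = [name for name, created, _ in logs if created < cutoff]
    # Of the surviving logs, drop the oldest until under the size cap.
    keep = sorted((l for l in logs if l[0] not in prune), key=lambda l: l[1])
    total = sum(size for _, _, size in keep)
    for name, _, size in keep:
        if total <= max_bytes:
            break
        prune.append(name)
        total -= size
    return prune
```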

Configuring Log Settings with PowerShell

Every SharePoint 2010 administrator should get cozy with Windows PowerShell. It is the best way to do repetitive tasks, and manipulating the trace logs is no exception. So far in this chapter, you have learned about using Central Administration to interact with trace logs. In this section, you learn how to use PowerShell to make those same configuration changes.

SPDiagnosticConfig

The first tool in the PowerShell arsenal is the Get-SPDiagnosticConfig cmdlet and its twin brother, Set-SPDiagnosticConfig. The former is used to retrieve the diagnostic settings in your farm; the latter is used to change them. Figure 6 shows the output of Get-SPDiagnosticConfig, and it reflects changing the log cut interval to 60 minutes, as you did previously.



Figure 6. Get-SPDiagnosticConfig output

Seeing the settings that Get-SPDiagnosticConfig displays gives an idea of what values its brother Set-SPDiagnosticConfig can set. Using PowerShell's built-in Get-Help cmdlet is also a good way to get ideas on how best to leverage it, especially with the -Examples switch, similar to the following.

Get-Help Set-SPDiagnosticConfig -Examples

This shows a couple of different methods of using Set-SPDiagnosticConfig to change the diagnostic values of your farm. The first method uses the command directly to alter values. The second method assigns a variable to an object that contains each property and its value. You can alter the value of one or more properties, then write the variable back with Set-SPDiagnosticConfig. Either way works fine; it's a matter of personal opinion as to which way you go.

Earlier, you learned that it's a good idea to move the location of the ULS trace logs. By default, they are located in the Logs directory of the SharePoint root. While they are fine there, space on the C:\ drive is almost holy ground. If the C:\ drive gets full, then Windows gets unhappy, and everyone (including IIS and SharePoint) is unhappy. To help prevent that, you can move your trace logs to another location, freeing up that precious space. The following PowerShell command moves the log location to E:\logs.

Set-SPDiagnosticConfig -LogLocation e:\Logs

It is important to note that this only changes the location that new log files are written to. It will not move existing log files. You must move them yourself.
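Moving the leftover files is a one-time job; as a rough sketch (written in Python purely for illustration, with hypothetical paths), it amounts to:

```python
import shutil
from pathlib import Path

def move_existing_trace_logs(old_dir, new_dir):
    """Move any *.log files left behind in the old LOGS folder to the
    new location that the diagnostic configuration now points at."""
    new_path = Path(new_dir)
    new_path.mkdir(parents=True, exist_ok=True)
    moved = []
    for log in Path(old_dir).glob("*.log"):
        shutil.move(str(log), str(new_path / log.name))
        moved.append(log.name)
    return moved
```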

SPLogLevel

You learned previously that you have some flexibility in configuring how different aspects of SharePoint log events. You saw how to look at these settings and change them in Central Administration. You can also use PowerShell to get and set that same information with the SPLogLevel cmdlets.

You can get a list of the cmdlets that deal with the log levels by running the command Get-Command -Noun SPLogLevel in a SharePoint Management Shell. The results should look similar to Figure 7.



Figure 7. SPLogLevel cmdlets

 

Let's start an examination of the available options by taking a look at Get-SPLogLevel, which reports the current logging levels in the farm. With no parameters, it lists every category and subcategory and reports their trace log and event log levels. Using Get-Member, you can see that Get-SPLogLevel reports information that is not available in Central Administration, such as the default trace and event log levels.

The SPLogLevel objects have a property named Area that corresponds to the top-level categories in Central Administration. Running the PowerShell command Get-SPLogLevel | select -Unique area displays those categories. To get all of the settings from a particular area takes a little work.

The -Identity parameter of Get-SPLogLevel corresponds to the second column (or Name column) of the log levels, which maps to the subcategories in Central Administration. This means that you cannot use Access Services for the Identity parameter, but you could use Administration, which is the first subcategory under Access Services. To get all of the logging levels for Access Services, use a command similar to the following.

Get-SPLogLevel | Where-Object {$_.area.tostring() -eq "Access Services"}

This uses the Area property of the log level, converts it to a string, and then displays the log level objects that match Access Services, as shown in Figure 8.

Figure 8. Access Services event categories

 Now that you've mastered Get-SPLogLevel, let's look at Set-SPLogLevel, its complementary cmdlet. You can use this one to set a specific log level to the trace or event logs for a category or group of categories.

Suppose that you are having trouble with the Office Web Applications and you want as much debugging information as you can get. Of course, you could go into Central Administration and check the box next to Office Web Apps, but that's no fun. Let's use PowerShell instead.

The following command uses PowerShell to get all of the SPLogLevel objects that are in the Office Web Applications category, then pipes them through Set-SPLogLevel to set their trace logging level to verbose.

Get-SPLogLevel | Where-Object {$_.area.tostring() -eq "Office Web Apps"} | Set-SPLogLevel -TraceSeverity verbose

In one fell swoop, you have set all of the logging levels you need. Now you can reproduce the error, then go through your trace logs, and discover what the problem is. Once you have conquered that Office Web Applications problem, you must return the logging levels back to normal. That's where the third SPLogLevel cmdlet, Clear-SPLogLevel, comes into play.

Much like in Central Administration, there is an easy way to reset all of your logging levels back to the default. The Clear-SPLogLevel cmdlet clears out any changes you have made, and sets the logging levels to their default values for both trace and event logging. If you run it with no parameters, it resets all of the logging levels to their defaults. Like Get-SPLogLevel and Set-SPLogLevel, you can also pass it optional parameters to reset specific categories or areas.

Using Logs to Troubleshoot

Having lots and lots of beautiful log files does you no good unless you can crack them open and use them to hunt down problems. In this section, you learn about some things to look for in the trace logs, and a variety of ways to look at them.

Introducing the Correlation ID

The first time that anyone opens up a SharePoint trace log, he or she feels the same sense of helplessness and of being overwhelmed. It's like being dropped into an unfamiliar city in the middle of rush hour; so many things are going on at once all around you, and none of it looks familiar.

Fortunately, SharePoint has provided a bit of a road map for you: the correlation ID. The correlation ID is a globally unique identifier (GUID) that is assigned to each conversation a user or process has with SharePoint. When an error occurs, an administrator can use the correlation ID to track down all of the events in the trace logs that pertain to the conversation where the problem occurred.

This is very helpful on those very, very rare occasions when end users contact the help desk because something is broken and, when asked what they were doing, they reply, "Nothing." Now, with correlation IDs, you can track those conversations with SharePoint and see exactly what was happening when the error occurred. Figure 9 shows a window that an end user might get with the correlation ID in it.
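Because the trace logs are tab-delimited text, pulling out a single conversation amounts to filtering lines on the correlation column. A minimal sketch in Python (the column layout shown is simplified; real ULS entries carry more fields, with the correlation ID at the end of the line):

```python
def lines_for_correlation(log_lines, correlation_id):
    """Return the trace-log lines whose correlation column matches the
    given ID. ULS logs are tab-delimited; in this simplified layout the
    correlation ID is the last field on each line."""
    matches = []
    for line in log_lines:
        fields = line.rstrip("\n").split("\t")
        if fields and fields[-1] == correlation_id:
            matches.append(line)
    return matches

sample = [
    "03/01/2022 13:30:01\tw3wp.exe\tSharePoint Foundation\tGeneral\tMedium\tRequest start\taaaa-1111",
    "03/01/2022 13:30:02\tw3wp.exe\tSharePoint Foundation\tDatabase\tHigh\tSQL timeout\tbbbb-2222",
]
hits = lines_for_correlation(sample, "aaaa-1111")
```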


Your account has been used to send a large amount of spam messages during the recent week

Dear friends, I am facing this problem in my Exchange 2010 for some users.

Exchange sends a mail to users saying that they have sent spam mails out.

I am including the reply mail also.

To: Ganapati N.S
Subject: Returned mail: see transcript for details

Dear user ganapati.ns@domainname.com,

Your account has been used to send a large amount of spam messages during the recent week.
Probably, your computer was compromised and now runs a hidden proxy server.

Please follow the instructions in order to keep your computer safe.

Virtually yours,
The domainname.com team.


Reply:

First, go to www.mxtoolbox.com and check whether your domain is blacklisted. Put your domain in the lookup box and click Lookup, then click the Blacklist link to see if you are listed. Then get yourself de-listed.
Second, use your firewall to limit outbound email to your mail servers only. If you don't know how to do this, post your firewall information here (make, model, and code version if possible) and we'll see what we can do.
Third, turn on verbose logging on your receive connectors and then check to see where most of the connections are coming from.

Hope that helps.


JAUCG


------------------------------------
Reply:
It's not listed on any blacklist.

Thanks Ajay Singh MCITP Exchange IBM Tivoli, HP DPS,


------------------------------------
Reply:

Do you recognise who the email is from? I suspect that the email is spam itself.

Have you got AV/anti-spam deployed on Exchange? Is it running and up to date?

Is AV deployed on your desktops?

Check the message header and see where it has come from.


Sukh


------------------------------------

Scheduling assistant not working in Outlook 2010 / unable to see free/busy

Running Outlook 2010 / Exchange 2010. When users try to use the Scheduling Assistant, they are unable to see others' free/busy info. They can see others' free/busy info by adding a user's calendar as a shared calendar. Autodiscover is working; when I run Test E-mail AutoConfiguration from Outlook, it passes. When I run Test-OutlookWebServices from Exchange, all tests pass and there are no errors. This happens whether Outlook is running in cached or non-cached mode. Please help.

Reply:

Hi

Does it work in OWA?

Steve


------------------------------------
Reply:
It DOES work in OWA.

------------------------------------
Reply:

Try & check this

http://technet.microsoft.com/en-us/library/ee424432.aspx

http://technet.microsoft.com/en-us/library/ee633469.aspx


There are a lot of settings in those links; which ones are relevant to my problem? I'm having a hard time understanding how they would affect the Scheduling Assistant.

------------------------------------
Reply:

Since it is working in OWA, there do not appear to be server-side issues.

Is the error reported for only a few users, or for all users?

Also try repairing the client-side Outlook issues.

http://www.exchangedictionary.com/index.php/Articles/exchange-2010-calendar-repair-assistant.html (server side)

Check that the services are running on the mailbox server, and configure logging.

For the client side,

repair the PST, or

http://www.outlookpst-repair.com/pstrepair/repair-recover-outlook-calendar.php


------------------------------------
Reply:
This happens to all users who have an Outlook profile on our terminal server. Local clients do not have this problem. Can you clarify what type of logging to enable?

------------------------------------
Reply:
Bump

------------------------------------
Reply:
Anyone have ideas as to why this happens?

------------------------------------

New Technet wiki about BizTalk Server: Performance Tuning & Optimization

Hi All,

There is a new TechNet Wiki article, "BizTalk Server: Performance Tuning & Optimization". Please refer to it if you are looking for articles and ways to tune and optimize BizTalk Server performance.

HTH,Thanks, Naushad (MCC/MCTS) http://alamnaushad.wordpress.com,My New Technet Wiki Article "BizTalk Server: Performance Tuning & Optimization" Please "Vote As Helpful" if this was useful while resolving your question!

Sending issue after upgrading to Outlook 2010

Hello,

I have an issue with some users who used to send to the "all users" group. The problem started after upgrading to Outlook 2010.

All of the users have the "send as" permission, but they are still getting the error "You don't have permission to send to" whenever they try to send to the all users group on behalf of another main mail address.
Please note that the main mail address has permission to send to the "all users" group.

Any suggestions how to sort out this issue.

Thank you


  • Edited by Member51 Monday, February 13, 2012 5:18 AM

Reply:

Have you tried via OWA and by creating a new Outlook profile?

Have you tested in online mode too?


Sukh


------------------------------------

Windows Live Mail doesn't work

Windows Live Mail freezes, disallowing navigation between messages and folders, and will not allow either opening or deleting of new messages. We have uninstalled and re-installed Windows Live Essentials twice to no avail. Hotmail works fine, as does Movie Maker. It's only Live Mail that's a problem. Any ideas?

Reply:
This forum is related to the Microsoft Application Virtualization product and not Windows Live Essentials. Please re-ask your question at the Windows Live Solution Center.


Twitter: @stealthpuppy | Blog: stealthpuppy.com

This forum post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

Please remember to click "Mark as Answer" or "Vote as Helpful" on the post that answers your question (or click "Unmark as Answer" if a marked post does not actually answer your question). This can be beneficial to other community members reading the thread.



------------------------------------

Mailbox Cleanup

Hi All,

I have exported the unused mailboxes (2,400), meaning mailboxes not used for more than 120 days.

Is there any script to delete mailboxes by display name?

Please help me.

Regards


Ganga


Reply:

There isn't a script available that will do this; you have to create one.

Which version of Exchange you're using will determine the right scripting language.


Sukh


------------------------------------

AD permissions messed up

I have a serious problem with my domain at work. I work in a school; we used to have one Windows 2003 server as a DC.
Meanwhile, I installed another server on the 2008 R2 platform and set it up as an additional DC. Then I reinstalled the first one, so now both of them run the 2008 R2 operating system.
Active Directory with users and policies was created a few years ago and worked fine. There were basically 3 types of users:

- student (user with minimal rights)
- teacher and other staff (SuperUser)
- administrator (domain admin)

Until a few days ago everything worked fine, as only the administrator was able to use Remote Desktop or access, for example, the server's c$ or d$ drive.
Now somehow it's all messed up, and I don't recall making any changes in AD or GP.

So symptoms are these:
- Students, teachers and all other users are able to connect via remote desktop to any machine including server.
- All of them are able to access \\server\c$ or similar folders by DEFAULT (this did not change on other workstations, only servers)

So my questions are these:
Does anyone know this kind of behaviour from experience and can give me a fast solution?
If not, where exactly in Active Directory group policy can I reset these options:
- forbid the use of Remote Desktop for all users except Administrator
- forbid browsing of any folder by any user unless it is specifically shared to that user

Another thing:
Lately, from XP computers, I've been getting a message that I can't run Remote Administrator, no matter whether I'm logged in as administrator or another user.
Does it have something to do with the fact that I've raised the functional level of the domain to 2008 R2? The message displayed is:
"Remote computer requires network level authentication, which your computer does not support."

Thanks in advance


Reply:

Hi,

First: It sounds like your users must be local administrators (at least). Log on with one of your student accounts, run whoami /groups, and run RSoP.

Second: Run RSoP and let us see the results.


Mohammad Javad Bagdeli


------------------------------------
Reply:

Hi,

First, there is only one student account, named 'student', that all students use to log on.
Second, right now I'm at home and connected to the server via TeamViewer VPN. I've tried to log on to one of the XP machines as student (from the server I connected to via VPN), and in that case the restrictions seem to work fine, so I couldn't log in. However, when I try to log on to the server as student from here (using the static IP I got from TeamViewer), I succeed, although I couldn't run cmd due to restrictions. The problem is that I shouldn't be able to do this at all with the student account.
Can I somehow get this data logged as administrator?
What exactly should I do after I run RSoP?


  • Edited by djryback Monday, February 27, 2012 9:01 PM

------------------------------------
Reply:
I cannot try this on a Windows 7 machine, since they are all turned off at the moment, but today I was able to log on normally using a teacher's account via Remote Desktop.
Could it be that only Windows 7 and Server 2008 machines are affected by this? (Only regarding Remote Desktop, because from XP I was able to browse the c$ folder on the server, logged in as student.)

------------------------------------
Reply:

I've managed to log on to the server using a teacher's account (which also shouldn't be possible) via TeamViewer VPN, and I ran whoami /groups.
This is the output:


GROUP INFORMATION
-----------------

Group Name                                 Type             SID          Attributes                                        
========================================== ================ ============ ==================================================
Everyone                                   Well-known group S-1-1-0      Mandatory group, Enabled by default, Enabled group
BUILTIN\Administrators                     Alias            S-1-5-32-544 Group used for deny only                          
BUILTIN\Users                              Alias            S-1-5-32-545 Mandatory group, Enabled by default, Enabled group
BUILTIN\Pre-Windows 2000 Compatible Access Alias            S-1-5-32-554 Group used for deny only                          
NT AUTHORITY\REMOTE INTERACTIVE LOGON      Well-known group S-1-5-14     Mandatory group, Enabled by default, Enabled group
NT AUTHORITY\INTERACTIVE                   Well-known group S-1-5-4      Mandatory group, Enabled by default, Enabled group
NT AUTHORITY\Authenticated Users           Well-known group S-1-5-11     Mandatory group, Enabled by default, Enabled group
NT AUTHORITY\This Organization             Well-known group S-1-5-15     Mandatory group, Enabled by default, Enabled group
LOCAL                                      Well-known group S-1-2-0      Mandatory group, Enabled by default, Enabled group
Mandatory Label\Medium Mandatory Level     Label            S-1-16-8192  Mandatory group, Enabled by default, Enabled group

An interesting thing: although I restricted access to the C drive of the server for user 'profesor', I could still write output to C$, as you can see below:

C:\Users\profesor>whoami /groups >c:\whoami.txt
Access is denied.

C:\Users\profesor>whoami /groups >\\server\c$\whoami.txt

C:\Users\profesor>


------------------------------------
Reply:

When I ran RSoP, it asked me about running mmc.exe. I accepted, and it opened the Resultant Set of Policy window. What do I do now?

Note: The reason I logged on as 'profesor' (the teacher's account) is that cmd is not disabled for this user.


------------------------------------
Reply:

Hi,

First: It sounds like your users must be local administrators (at least). Log on with one of your student accounts, run whoami /groups, and run RSoP.

Second: Run RSoP and let us see the results.


Mohammad Javad Bagdeli


Are you referring to the student or the teacher accounts? No, they are not local administrators; at least they were not, and they shouldn't be. For the specific needs of teachers, I have created a local administrator account on their computers.

------------------------------------

Post-Transition Administrator Task: Active Sync Policy Creation

Hi all, as many of you may already know, any BPOS Messaging ActiveSync policies that may have been created will not be transitioned into Exchange Online 365, as called out in the post-transition Admin Checklist (#8): http://www.microsoft.com/online/help/en-us/helphowto/8939e90a-59dc-4f0f-aec0-19a899c0af75.htm#BKMK_AfterTransition

So, for any BPOS admins who have an ActiveSync policy in place requiring device passwords, encryption, and so on, you will want to review the following link, which walks you through how to create an ActiveSync policy within Exchange Online 365, specifically in the Exchange Control Panel (ECP), in order to have your mobile devices managed when using Exchange Online 365:

http://go.microsoft.com/fwlink/?LinkId=212255

Manage Exchange ActiveSync for Your Organization

Microsoft Exchange ActiveSync lets users synchronize their mobile phones with their Microsoft Exchange mailboxes. You can manage what devices users can use to synchronize with Exchange, and manage how those devices synchronize to control long-distance and data charges.

Exchange ActiveSync is offered on many mobile devices and can provide different levels of security for your organization's data and experiences for your users. As an administrator, in the Exchange Control Panel, you can specify which devices your users can use to synchronize, and how you want your organization's data to be safeguarded on your user's devices.

  • Manage Access for Mobile Devices   Device access rules control which devices your users can use to synchronize with Exchange. This functionality may involve ongoing maintenance and may thus require making an action plan prior to instituting access rules.
  • Manage Data Protection on Mobile Devices   Exchange ActiveSync device policies provide you a way to require that your users use a PIN on their mobile devices and have the e-mails and Exchange data encrypted on the devices. They also let you decide how to handle devices that don't support your organization policies.
  • Troubleshoot Exchange ActiveSync Synchronization Problems   Troubleshoot common synchronization problems that users might experience.

HTH


Transitions Community Lead ...Ryan J. Phillips

Microsoft sql server driver throws out of memory and lots of TDSPacket classes loaded

I am trying to fetch records from a table that has 12 million records, using the Microsoft SQL Server driver, with the statement

select * from table order by stringcolumn

Db Product :Microsoft SQL Server

Driver vendor :Microsoft SQL Server JDBC Driver 2.0

Driver Version : 2.0.1803.100

I am getting an out-of-memory error, and a heap dump shows lots of TDSPacket classes. The same code and database work with the INET SQL Server driver, so there is something I am missing with the Microsoft driver. Any ideas?

Appreciate any help.

Thanks


SS


Reply:

Hello,

Could you tell us the full version and full edition of your SQL Server?

Do you have an idea of the mean length of a row? (12,000,000 rows with a mean length of 1,000 bytes gives a result set of about 12 GB, which may be too much to return in one go, hence an OutOfMemoryException.)
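The sizing concern can be checked with quick back-of-the-envelope arithmetic (a Python sketch; the 1,000-byte mean row length is the assumption used in this reply):

```python
# Rough size of the result set the driver is asked to buffer.
rows = 12_000_000        # row count from sp_spaceused
mean_row_bytes = 1_000   # assumed mean row length
total_gib = rows * mean_row_bytes / 1024**3
print(round(total_gib, 1))  # about 11.2 GiB, far beyond a typical JVM heap
```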

Could you explain what you mean when you write "Same code, database works for INET sql server driver"? If you were thinking of an ASP or Windows Forms application loading a DataGridView (or similar control), such applications often return data by pages or by a given number of rows, to avoid data transfers that are too big in comparison with the available memory.

We await your feedback so we can try to help you more efficiently.

Have a nice day


Mark Post as helpful if it provides any help.Otherwise,leave it as it is.


------------------------------------
Reply:

Thank you very much for your reply. Since my application throws out of memory, I wrote a test JDBC class to see how the SQL Server driver behaves differently from the INET driver. By "same code and database" I mean using the same test code to connect to the same database. I understand a result set of 12 million rows is a lot, but I am trying to find areas for problems/improvements when using the Microsoft SQL Server driver. I have pasted my sample code below. When I execute it using the INET driver, it comes back within a second. When I use the Microsoft driver, it takes 10 minutes.

I added most of the info; please let me know if anything else is needed. I would really like to understand what is different in the Microsoft driver.

My debug statement prints

I ran the profiler and see 3,039,166 reads and 16 writes.

Maybe the TDSPacket class is loaded because of the reads.

I see lots of the statements below in the profiler.

RPC:Completed exec sp_reset_connection

Audit Login -- network protocol: LPC

 Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (X64)
 Apr  2 2010 15:48:46
 Copyright (c) Microsoft Corporation
 Enterprise Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1)

When I run exec sp_spaceused 'dbo.table' I get:

 name       rows         reserved     data         index_size     unused   
 ---------  -----------  -----------  -----------  -------------  --------- 
 table      12023184     32254808 KB  14239400 KB  17999880 KB    15528 KB


SELECT OBJECT_NAME (id) tablename
     , COUNT (1)        nr_columns
     , SUM (length)     maxrowlength
FROM   syscolumns
GROUP BY OBJECT_NAME (id)
ORDER BY OBJECT_NAME (id);

Gives

table 204            4700           

My sample code looks like this:

import java.sql.*;

public class TestSql {

    public static void main(String args[]) {

        String database = "jdbc:sqlserver://<host>:1433;databaseName=<database>;integratedSecurity=false";
        ResultSet rs = null;
        Statement st = null;
        java.sql.Driver driver = null;

        try {
            // Register the driver matching the connection URL.
            try {
                if (database.indexOf("inetdae") != -1) {
                    driver = (Driver) Class.forName("com.inet.tds.TdsDriver").newInstance();
                } else if (database.indexOf("sqlserver") != -1) {
                    driver = (Driver) Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver").newInstance();
                }
            } catch (Exception e) {
                System.err.println(e);
                e.printStackTrace();
                return;
            }

            Connection conn = DriverManager.getConnection(database, "user", "password");
            System.out.println("Db Product :" + conn.getMetaData().getDatabaseProductName());
            System.out.println("Driver vendor :" + conn.getMetaData().getDriverName());
            System.out.println("Driver Version : " + conn.getMetaData().getDriverVersion());
            conn.setAutoCommit(false);

            String sqlStmt = "select * from table order by description";
            st = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
            st.setFetchSize(1);

            // Time the execute call.
            long t1 = System.currentTimeMillis();
            rs = st.executeQuery(sqlStmt);
            long t2 = System.currentTimeMillis();
            long delta = t2 - t1;
            System.out.println("Execute took :" + delta / 1000 + "(" + delta + ") secs.");

            // Time fetching the first three rows.
            t1 = System.currentTimeMillis();
            int i = 1;
            while (rs.next()) {
                String wonum = rs.getString(1);
                System.out.println("col 1 :" + wonum);
                if (i++ > 2) {
                    System.out.println("break;");
                    break;
                }
            }
            t2 = System.currentTimeMillis();
            delta = t2 - t1;

            st.cancel();
            st.close();
            rs.close();
            System.out.println("Fetching 3 records took :" + delta / 1000 + "(" + delta + ") secs.");
            conn.commit();
            System.out.println(" completed :" + i);
        } catch (SQLException ex) {
            try {
                st.cancel();
                st.close();
                rs.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
            System.out.println("\n*** SQLException caught ***\n");
            while (ex != null) {
                System.out.println("SQLState: " + ex.getSQLState());
                System.out.println("Message: " + ex.getMessage());
                System.out.println("Vendor: " + ex.getErrorCode());
                ex.printStackTrace();
                ex = ex.getNextException();
                System.out.println("");
            }
        } catch (java.lang.Exception ex) {
            ex.printStackTrace();
        }
    }
} // end class


SS


Execute took :664(664984) secs.

Fetching 3 records took :0(47) secs.

  • Edited by SS12 Monday, February 27, 2012 8:27 PM
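For anyone hitting the same symptom: one difference that can explain a ten-minute executeQuery is response buffering. Early versions of the Microsoft SQL Server JDBC driver read the entire result set into memory by default before executeQuery returns, while the responseBuffering=adaptive connection property (available from driver 1.2, and the default from 2.0 on) streams rows as the application fetches them. A minimal sketch of the URL change, using placeholder host and database names:

```java
// Sketch: a connection URL asking the Microsoft JDBC driver to stream rows
// instead of buffering the whole result set before executeQuery returns.
// "myhost" and "mydb" are placeholder names.
public class AdaptiveUrlSketch {
    public static void main(String[] args) {
        String base = "jdbc:sqlserver://myhost:1433;databaseName=mydb";
        // responseBuffering=adaptive tells the driver to read rows from the
        // TDS stream as next() is called, rather than all up front.
        String url = base + ";responseBuffering=adaptive";
        System.out.println(url);
    }
}
```

Older drivers without adaptive buffering can try selectMethod=cursor instead, which uses a server-side cursor. Actual behavior depends on the driver version, so treat this as a hint rather than a guaranteed fix.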

------------------------------------

Blue Screen Windows 7

This is the problem I am having, and I have no idea how to fix it. The following is the error code shown upon start up.

Problem signature:

Problem Event Name: BlueScreen

OS Version: 6.1.7600.2.0.0.256.1

Locale ID: 4105

Additional information about the problem:

BCCode: 34

BCP1: 00050830

BCP2: 8C903854

BCP3: 8C903430

BCP4: 829067A0

OS Version: 6_1_7600

Service Pack: 0_0

Product: 256_1

Files that help describe the problem:

C:\Windows\Minidump\021312-22230-01.dmp

C:\Users\Sapphire22\AppData\Local\Temp\WER-37315-0.sysdata.xml


Reply:

Hi,

Bug Check 0x34: CACHE_MANAGER. This indicates that a problem occurred in the file system's cache manager.

One possible cause of this bug check is depletion of nonpaged pool memory. If the nonpaged pool memory is completely depleted, this error can stop the system. However, during the indexing process, if the amount of available nonpaged pool memory is very low, another kernel-mode driver requiring nonpaged pool memory can also trigger this error.

To resolve a nonpaged pool memory depletion problem: Add new physical memory to the computer. This will increase the quantity of nonpaged pool memory available to the kernel.

Please refer:
http://msdn.microsoft.com/en-us/library/ff557491(v=vs.85).aspx


William Tan

TechNet Community Support


------------------------------------
Reply:

------------------------------------

IE9 Download Prompt

IE9 Download Prompt

I've run across an issue with IE9 that is affecting file downloads. Some clients get a prompt like the first image below, and after clicking any option nothing happens. Other clients get a prompt like the second image below and everything works correctly. From my research, the second image reflects the new download manager in IE9, but I can't find a setting that controls the behavior between image 1 and image 2. The images reflect the same download from the same URL. Any assistance will be greatly appreciated.

image 1

image 2


Reply:
I've come across something that seems to be part of the issue. A software package we use changes the iexplore.exe value from 1 to 0 in the FEATURE_MIME_HANDLING key each time the web-based app is opened. I found some info about this key at http://technet.microsoft.com/en-us/library/cc749557%28v=ws.10%29.aspx. I am unsure why this setting would break file downloads, though.

------------------------------------
Reply:
 

Hi,

The download prompt in IE9 should be displayed like image 2. It is a new feature in IE9 called the download manager.

I suggest performing these tests on the problem PC that shows the download prompt like image 1:

1. Log on in Safe Mode with networking

2. Reset IE9 advanced settings

3. Reinstall IE9:

http://windows.microsoft.com/en-US/windows7/how-do-i-install-or-uninstall-internet-explorer-9

If the issue persists, I suggest contacting the IE forum for further help:

http://social.technet.microsoft.com/Forums/en-US/ieitprocurrentver/threads

The reason we recommend posting in the appropriate forum is that you will get the most qualified pool of respondents, and other partners who read the forums regularly can either share their knowledge or learn from your interaction with us. Thank you for your understanding.

Regards,

Leo   Huang

TechNet Subscriber Support

If you are TechNet Subscription user and have any feedback on our support quality, please send your feedback here.


Leo Huang

TechNet Community Support


------------------------------------

Add metadata property to search refinement panel?

Hi,

I have created a site column and then added that site column as metadata property in Central admin.

After that i made changes in xml of refinement panel which as below

<Category Title="Category" Description="category of item" Type="Microsoft.Office.Server.Search.WebControls.ManagedPropertyFilterGenerator" MetadataThreshold="5" NumberOfFiltersToDisplay="1" MaxNumberOfFilters="4" SortBy="Frequency" SortDirection="Descending" SortByForMoreFilters="Name" SortDirectionForMoreFilters="Ascending" ShowMoreLink="True" MappedProperty="CategoryMap" MoreLinkText="show more" LessLinkText="show fewer">
  <CustomFilters MappingType="ValueMapping" DataType="String" ValueReference="Absolute" ShowAllInMore="False">
    <CustomFilter CustomValue="A">
      <OriginalValue>B</OriginalValue>
    </CustomFilter>
  </CustomFilters>
</Category>

When I add the new category to the XML of the refinement panel and start a full crawl, the metadata property is still not visible on the refinement panel.

thanks,

Gaurav

 


Reply:

"When the refiner is not showing in the Search Center there are a few things to check.

  1. Make sure to fully crawl the content after creating the managed property and confirm the crawled property contains
    values.
  2. Make sure there is enough data which uses in this case the language. In the above XML the value of the
    MetadataThreshold attribute is set to 3. This means the number of results that must contain a value to display the
    filter generator under the filter category is set to 3.
  3. Uncheck Use Default Configuration in the webpart properties of the Refinement Panel, section Refinement.
  4. In the webpart properties of the Refinement Panel in the section Refinement a value is displayed for Number of
    Categories to Display. If the number of categories exceeds this number and the new category is defined last in the XML,
    it won't show up.  "

reference:

http://www.itidea.nl/index.php/search-refiners-part-1-expanding-the-ootb-search-refinement-panel/


MCTS,MCPD Sharepoint 2010. My Blog- http://sharepoint-journey.com


If a post answers your question, please click "Mark As Answer" on that post and "Vote as Helpful



------------------------------------

Known issues and Workaround about Launchpad "Computer Monitoring Error" are published.

Known issues and Workaround about Launchpad "Computer Monitoring Error" are published.

http://social.technet.microsoft.com/wiki/contents/articles/7852.computer-monitoring-error-on-dashboard-or-launchpad.aspx

This wiki describes the two new known issues that trigger the "Computer Monitoring Error".


This post is "AS IS" and confers no rights. Ning Kuang[MSFT] Windows HSBS Program Manager


Reply:

Hi,

Thanks for your time and the notification. I hope these known issues will be fixed soon.

Regards,
James


James Xiong

TechNet Community Support


------------------------------------

How to convert a sharepoint list item to record.

When a sharepoint list item is converted to Record then it cannot be deleted.

Is there a way that we can change a sharepoint List item to Record through code.


kukdai


Reply:

First, import this namespace:

Imports Microsoft.Office.RecordsManagement.RecordsRepository

then use this in code, where item is an SPListItem:

Records.DeclareItemAsRecord(item)

you can also check whether the item is already a record:

If Records.IsRecord(item) Then ...

there is a sample here as well, in the 101 code samples project

http://code.msdn.microsoft.com/SharePoint-2010-101-Code-da251182


------------------------------------

Gmail settings for Microsoft Outlook 2010

Follow following link to view gmail settings for Outlook 2010

http://support.google.com/mail/bin/answer.py?hl=en&answer=77689


Please remember to vote if the info helped you :: Please post back so everyone can benefit from the solution


Reply:

Hi,

Thanks for sharing knowledge here.

Best regards,


Rex Zhang

TechNet Community Support


------------------------------------

HP Laserjet 1020 not runs when wizards windows setup finish installation

Video proof: http://www.youtube.com/watch?v=Ur0oh1cCVeY&feature=youtu.be

When you install the driver, the printer works. But once the Windows setup wizard finishes auto-installing the printer, it no longer works; you have to do everything again to make it work, and then it only works once...

To fix it:
Remove all folders and files in:
C:\windows\system32\spool\printers\
C:\windows\system32\spool\drivers\x64\
C:\windows\system32\spool\drivers\W32X86\
Install the 64-bit drivers from HP, without compatibility mode activated.
Connect the printer's USB cable and print right away.

Voilà...

This is a bug that was also reported on Windows XP; I don't know why it was duplicated in Windows 8. Once the setup wizard finishes installing the printer, it becomes impossible to print anything, and you have to repeat all the steps.

The HP drivers are compatible, but Windows 8 reinstalls them again, creating a loop that leaves the drivers unrecognized. (That's my theory, but I've seen in the Vista forums that the cause may be the spooler service...?)

Thank you for watching the video :)


Asus ep121 Browsers

Both the Metro and desktop browsers are flaky. Text overwrites text.

I can't scroll in the Metro browser.

I did not have these complaints when running Windows 7 on the same machine.


Thomas F. Divine http://www.pcausa.com

Developed a Rapid Migration Guide ( Exchange 2003 to Exchange 2010 )

Developed a Rapid Migration Guide ( Exchange 2003 to Exchange 2010 )

I hope I have brought everything together in one place.

http://gallery.technet.microsoft.com/Rapid-Migration-Guide-from-7ade7012

Please let me know if you need any changes or updates in the doc.


Satheshwaran Manoharan | Exchange 2003/2007/2010 | Blog:http://www.careexchange.in | Please mark it as an answer if it really helps you

Examples of Third Party Applications Using EWS

What third party applications have you developed using EWS?

 

Thanks


  • Edited by Imran Azad Tuesday, November 15, 2011 6:14 PM

Reply:
BlackBerry Enterprise Server for calendaring.

Sukh


------------------------------------

3 1/2 hrs



Well, full of anticipation, I'm now down to 3 1/2 hrs (& counting) til the ISO finishes & I can swap the DP for the CP... oh, down to 3 hrs, already, lol.  Patience is a virtue, builds character or something...   Microsoft's servers are sure earning their treats today!

Drew MS Partner / MS Beta Tester / Pres. Computer Issues Pres. Computer Issues www.drewsci.com


  • Edited by Drew1903 Wednesday, February 29, 2012 7:24 PM

Reply:
LOL, you can say that again Drew.  Hopefully you're full of CP goodness by the end of the day.  <G>

--Joseph [MSFT] http://blogs.technet.com/b/joscon/


------------------------------------
Reply:

The fun of discovery is well worth the wait, Drew.  ;)

So far I've gotten 2 CP ISO files (32 and 64 bit), a VMware update, and a few other miscellaneous things.

I wonder how many people will consider upgrading to fiber optic internet service after today... 

 

-Noel


Detailed how-to in my new eBook: Configure The Windows 7 "To Work" Options


------------------------------------
Reply:
Well, always takes less time than initially shown... burning DVD, now :D <-- Smile.  The DL actually took maybe around 2 hrs.  & the good news is, since I threw the Win7 x86 I had out the window yesterday, Win8 can go where that was & I won't be running Win8 as a vm, yea!!  Will just dual-boot Win8 & my Win 7 x64.  I only took the x64 of Win8.

The fun will be discovering this compared to the DP... anxious to see how much difference there is.

Cheers,
Drew

Drew MS Partner / MS Beta Tester / Pres. Computer Issues Pres. Computer Issues www.drewsci.com


------------------------------------
Reply:

Well, I've a few observations already...

  • You now get some Metro color choices, and there are a few more options during install.
  • There's no Start button at all.
  • Something's wrong with the CSS for the integrated Help, but that may just be Microsoft's servers being overloaded.
  • 5 Windows Updates are already available.

It's a blast checking out a new system!

-Noel


Detailed how-to in my new eBook: Configure The Windows 7 "To Work" Options



  • Edited by Noel Carboni Wednesday, February 29, 2012 8:28 PM

------------------------------------
Reply:

Allow me to preface this w/ WOW!

Installed in about 15 mins. & takes up 14.4Gig; couple mins. more to put in personal info, personalize things an initial bit & a few secs. (for it) to find Devices (camera, printer, etc.).  Not only does it install quickly, it does everything quickly... impossible not to notice how snappy it is, both internally & surfing.  And I thought 7 was fast...

Cool differences & greater sophistication compared to the DP are becoming immediately obvious, one after another...  Yes, Noel, this will be lots more "fun" than the DP was.  Anyway, time for me to muck about, do a few 'me' things, explore & discover, & decide what I may say to Connect going forward.

And, glad not a VM.

Cheers,
Drew


Drew MS Partner / MS Beta Tester / Pres. Computer Issues Pres. Computer Issues www.drewsci.com



  • Edited by Drew1903 Thursday, March 1, 2012 3:07 AM

------------------------------------

Changing Boot

Didn't know where the heck to place this, so I leaned more toward Miscellaneous than Windows 7.

FYI:
This is due to folks in another tech forum asking about changing things in a dual-boot scenario.  One OS was installed, then another, hence the dual-boot... we'll call them 1 & 2.  NOW, they want OS2 to exist and boot without OS1.
The following is a way to do it:

I had a dual-boot, Windows 7 x86 & x64.  x86 was the 1st OS installed & held the Boot.

Using bcdboot C:\Windows /s C: the boot was moved to the x64's partition.  This is done via cmd on the (now) 'preferred' machine, where the Boot is to be.
 
The drive holding the x86 was disconnected to confirm the x64 would now boot without the x86 in the picture. (optional)

The x86 partition was deleted & formatted.

In msconfig, all boot paths except the x64's were deleted.

Can & may use the, now, free partition for Windows 8; prefer, not virtual. (but, that's an aside to this discussion)

Anyway, the above is how it's done.  Quite easy & simple, really.
 

Drew MS Partner / MS Beta Tester / Pres. Computer Issues Pres. Computer Issues www.drewsci.com






  • Edited by Drew1903 Wednesday, February 29, 2012 2:32 AM

Reply:

Hi,

Thanks for sharing your experience about dual-boot system. These steps are very easy to understand.

At this time, I would like to share some links about multi-boot also:

http://windows.microsoft.com/en-us/windows7/Install-more-than-one-operating-system-multiboot

http://support.microsoft.com/kb/919529

Best Regards,

Kim Zhou


Please remember to click "Mark as Answer" on the post that helps you, and to click "Unmark as Answer" if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread.


------------------------------------
Reply:

I have used two tools for multiboot PC in labs.

1. GRUB comes from the open-source world. There is a learning curve, but I could resolve some heterogeneous configurations with it.

2. In a Windows environment I adopted the small program EasyBCD, which hides the details behind bcdboot. I could have a configuration done in "one minute": http://neosmart.net/EasyBCD/

Regards

Milos


------------------------------------

Invalid Object Name ?

Good Morning!

select * from Studentdetails

I am getting a red curly line on the table name.

The table exists in the database; I can perform CRUD on it, but it still gives me the red curly line.

The only thing that might (or need not) be surprising is that I imported it from an Excel table into the database.

Why does it think of it as an error? 

Edit: 

It doesn't invoke IntelliSense, which is so vital, man.


Reply:

Is IntelliSense working for other objects? After importing the table from Excel did you refresh the intellisense local cache by selecting Edit -> IntelliSense -> Refresh Local Cache from SSMS (or Ctrl + Shift + R)?

- Krishnakumar S


------------------------------------
Reply:

Hey Thanks Mate! 

Resolved within 25 mins... :-)


------------------------------------

InfoPath 2010 Multiple Cascading Multiselect List Boxes with user add

Having recently mastered this, and forgotten the magic, and had to relearn it, I've decided to not merely lurk in this fine forum, but actually start a discussion that I hope will aid somebody else.

All the fine examples I have found on various blogs involving cascading list boxes use the typical City, State, Country example and involve a single list with multiple columns that relate to each other.

I needed a way to use multiple lists, and here is the magic key:

The first box is a dropdown list that has 3 values. It is used basically to divide the second dropdown by 3, thus reducing the number of choices. The second cascades in turn to the third, reducing the clutter in all the dropdowns.

I found the key to making it all function is to reference each list in the others as a lookup column. I'll stop there, cause this is probably enough of a hint to get creative juices flowing, and anything further would require screenshots for clarity and I'm not ready to go to that extent.

Hope it helps somebody. I have found this forum to be very helpful, and am trying to give back a bit to the community.

/Robin


Robin

Need to Add Record Create Date while loading a file to Table

Hi All,

I am trying to load an Excel file. I need to add a record create date (sysdate) to the table when the file gets loaded into the table. How do I do this using SSIS?

Regards,

James.


Reply:

First, a heads-up: your thread type is not a question/issue.

Sysdate is easily obtained from one of the package's canned System variables, namely System::StartTime;

to add it you can use a Derived Column transformation with the <Add as new Column> setting.


Arthur My Blog


------------------------------------
Reply:

Hi, the easiest way is to add a column to the table with a default value of getdate(); this way you don't need to map it in SSIS. Check this article:

http://sqlserverplanet.com/ddl/add-column-default-value

Another possible solution is to add a derived column in SSIS and set it to the system variable StartTime.

David.


------------------------------------
Reply:

Thanks Arthur... your solution worked.

Is there any other way to post something which is neither a question nor an issue?


------------------------------------
Reply:
Twitter, or at my blog http://geekswithblogs.net/Compudicted/contact.aspx

Arthur My Blog


------------------------------------

Error creating share on Windows 7 Enterprise

When I attempt to share a folder, I get an error saying that I cannot share the folder.  When I use advanced, I get a more specific reason.  "There are no more endpoints available from the endpoint mapper."

I don't know what that means, but, i used to be able to create shares.

I noticed the problem after adjusting the size of my C and E partitions on my hard drive.

First I shrunk the C volume.  I wanted to make E bigger, but in the infinite wisdom of the OS developers, that was not an option.  So I needed to move the contents of E to a directory on C.  This removed my shares to the folders that were on E; I am not sure why the shares could not be moved, since they have nothing to do with a physical location.  Then I removed the E partition, created a new E partition of the size I wanted, and moved the backed-up files and folders back.  Then I attempted to create my share.

I got the error: "There are no more endpoints available from the endpoint mapper."

Before this, I did not know what an endpoint mapper is.  I still don't.  I am a software engineer with 15 years of experience, and I am very good at using Google and Bing to find things I don't know.  I have not found a solution to this.

Also, the troubleshooter did not find any issues with sharing.

I cannot start the Windows Firewall.  That was something I had come across, but I have never used the Windows Firewall and have created shares before.


Reply:

In an elevated command prompt, run sfc /scannow


------------------------------------

MDS Web Services with SSL giving "Could not find a base address..." error

All,

Basic install with IIS cert rooted to the FQN of the site.

Set httpGetEnabled="false" httpsGetEnabled="true"

Get the error:

Could not find a base address that matches scheme http for the endpoint with
binding WSHttpBinding. Registered base address schemes are [https].

This error is received on the production environment as well - same configuration

Using VS 2010, IE, or the IIS admin, I get the same result...

So... we're finding the resources - just not getting past the security.

The web GUI works fine with SSL; non-SSL fails appropriately.

Any ideas?

Tony


Richard A. "Tony" Eckel Rochester, NY

Consumer Preview Download ISOs Available - Anticipation...

The Consumer Preview is now available.

6 gigabytes of fresh Windows 8 data now available in ISO format.

http://windows.microsoft.com/en-US/windows-8/iso

Anticipation...

Next step here:  Set up some VMware virtual machines and install...  I'm excited to see what's changed from the DP.  It's going to be a fun day!

-Noel


Detailed how-to in my new eBook: Configure The Windows 7 "To Work" Options


Reply:
Glad to hear you are downloading and testing!

------------------------------------
Reply:

I'm in the same boat. So close. You are so much faster than I am... I'm only downloading at ~400KB a sec. 4 bonded T1's here at work (yea, I know....).

Twiddling fingers is making a comeback!


Dustin Harper dharper@mstechpages.com http://www.mstechpages.com --- Windows Help and Support Page


------------------------------------
Reply:

On Wed, 29 Feb 2012 17:15:09 +0000, Dustin Harper wrote:

I'm in the same boat. So close. You are so much faster than I am... I'm only downloading at ~400KB a sec. 4 bonded T1's here at work (yea, I know....).

If you've got access to MSDN you should download from there. They are using
the Akami servers and download manager. Much faster than downloading from
the public site.


Paul Adare
MVP - Forefront Identity Manager
http://www.identit.ca
Hackers have kernel knowledge.


------------------------------------
Reply:

Well, I've got x64 installed on a VMware 8.0.2 virtual machine.  It still requires the VM to be set up as a Windows 7 x64 system, OS to be installed later, then just install from the ISO.  A very smooth process so far. 

Next step, VMware Tools...

Notice that there's no "Start" button at all...

 

-Noel


Detailed how-to in my new eBook: Configure The Windows 7 "To Work" Options

  • Edited by Noel Carboni Wednesday, February 29, 2012 7:43 PM

------------------------------------

Best way to convince an Admin that he should cleanup unused Group Policies and Consolidate if possible.

I have an Admin who doesn't think it is necessary to clean up group policies that are no longer used.  Personally, I like my house clean, and I find it much easier to administer a domain if there aren't a bunch of policies that don't do anything.  It also bugs me when the same policy is replicated over and over for no reason.

A good example is our WSUS policy.  For us we separate them into groups, 1 through 4.  The groups have different meaning.

Group 1 - Test Systems
Group 2 - Systems that are not redundant
Group 3 and Group 4 -  Redundant systems split into the two groups.

So really we only need 4 group policies to cover all systems, create groups with corresponding names and drop the servers into them.

For the life of me I don't know why this is an argument but it's about to be so I'm looking for ammo. :)

Thoughts are much appreciated.  Sites and sources even more so.

Thank you


David Jenkins


Reply:

So far this has good info.

http://technet.microsoft.com/en-us/library/cc779168(v=ws.10).aspx

  • (Paraphrased) Expediting user startup and logon, because machines and users do not have to process group policies that don't apply to them.


David Jenkins


------------------------------------
Reply:

That's a good link, which is titled, "Best Practices for Group Policies."

Curious, you posted this as a discussion, or are you asking a question to get a specific answer to arm yourself with data against your colleague?

Or is s/he our boss? Does s/he know you're posting this?

.

As for other reasons, it's just good housekeeping, as you said. Why would a client need to process a policy that it's not targeted for?

Also, it depends on how your GPOs are configured. Are they all at the domain level with filtering, permissions, etc. controlling which machines they hit, or are they linked to OUs? OUs are the better method, because I like to minimize filtering and altering permissions.

Any unused GPO can be unlinked by deleting the link at any level, and it will simply remain in the list of GPOs, where it can be used again later or deleted if out of scope.

For further GPO specifics, I would recommend posting this to the GPO forum.

.

Maybe we can ask one of the AD moderators monitoring this thread to move it to the GPO forum for you:
http://social.technet.microsoft.com/Forums/en-US/winserverGP/threads

.

Ace


Ace Fekay
MVP, MCT, MCITP Enterprise Administrator, MCTS Windows 2008 & Exchange 2007 & Exchange 2010, Exchange 2010 Enterprise Administrator, MCSE & MCSA 2003/2000, MCSA Messaging 2003
Microsoft Certified Trainer
Microsoft MVP - Directory Services
Complete List of Technical Blogs: http://www.delawarecountycomputerconsulting.com/technicalblogs.php

This posting is provided AS-IS with no warranties or guarantees and confers no rights.

FaceBook Twitter LinkedIn


------------------------------------
Reply:

Apart from the housekeeping there is one more reason: prior to Windows 2008, when you create a new Group Policy object, an additional folder named adm is created for each GPO in sysvol. Each folder contains the default adm templates, about 4 MB in size, so 100 GPOs means 400 MB of space is gone. This is another reason to clean up unused GPOs: it saves space and avoids sysvol bloat.


Awinish Vishwakarma - MVP-DS

My Blog: awinish.wordpress.com

Disclaimer This posting is provided AS-IS with no warranties/guarantees and confers no rights.


------------------------------------
Reply:

Ace,

It's posted as a discussion because that's exactly what it is.  There is no 'answer' that is 100% right.  It's a colleague and I really wouldn't care if it were my boss.  I'm not going to get fired for research and I haven't said anything negative about anyone.  I think it's in really poor taste for you to infer such a thing.


David Jenkins


------------------------------------
Reply:

Ace,

It's posted as a discussion because that's exactly what it is.  There is no 'answer' that is 100% right.  It's a colleague and I really wouldn't care if it were my boss.  I'm not going to get fired for research and I haven't said anything negative about anyone.  I think it's in really poor taste for you to infer such a thing.


David Jenkins

David,

I was not inferring or implying anything. I was just asking as part of this discussion; I thought it was a valid question. I've seen posts in the past (not necessarily in this forum, but in the old MS newsgroups and in LinkedIn discussions) where a poster posted about a colleague, the colleague got wind of it, read it, and posted back with negative comments, and, judging by the responses that ensued, it resulted in a rift between them, even though the original poster's post was completely benign. I guess it comes down to the colleague's perception and reaction when he or she reads it, good or bad, possibly seeing it as someone talking behind their back.

Like I said, I was just asking, as part of this discussion. No harm was meant, nor anything implied or inferred.

.

As for the technical reasons, I hope the info we've offered helped.

.


Ace Fekay
MVP, MCT, MCITP Enterprise Administrator, MCTS Windows 2008 & Exchange 2007 & Exchange 2010, Exchange 2010 Enterprise Administrator, MCSE & MCSA 2003/2000, MCSA Messaging 2003
Microsoft Certified Trainer
Microsoft MVP - Directory Services
Complete List of Technical Blogs: http://www.delawarecountycomputerconsulting.com/technicalblogs.php

This posting is provided AS-IS with no warranties or guarantees and confers no rights.

FaceBookTwitterLinkedIn



------------------------------------
Reply:

No worries. 

I just wanted to ensure, now that you mentioned it, that my post was not an attempt to slight or embarrass anyone. 

I've had a ton of admins give reasons for simply not touching anything, and I totally understand.  I'm sure you've had to ask yourself whether it was wise to remove an old group or account (in this case, a policy) without thoroughly checking on the downsides of the removal.  In a lot of instances admins just don't have the time to do the investigating, so they don't bother.

Thanks for responding.


David Jenkins


------------------------------------
Reply:

I'm sure it wasn't meant to slight or embarrass, and I look at it that way, but the perception and interpretation of the person reading it would be the main factor.

As for why they would or would not do their jobs keeping the infrastructure optimized, that is a good question. I'm sure they have their own justifiable reasons, whether technical or otherwise.

Good luck!


Ace Fekay
MVP, MCT, MCITP Enterprise Administrator, MCTS Windows 2008 & Exchange 2007 & Exchange 2010, Exchange 2010 Enterprise Administrator, MCSE & MCSA 2003/2000, MCSA Messaging 2003
Microsoft Certified Trainer
Microsoft MVP - Directory Services
Complete List of Technical Blogs: http://www.delawarecountycomputerconsulting.com/technicalblogs.php

This posting is provided AS-IS with no warranties or guarantees and confers no rights.

FaceBook Twitter LinkedIn


------------------------------------
Reply:

Hello,

As you say, there is no correct or incorrect answer here, so a discussion about the how-to is a good option. But I think the GPO forum is the better place to start this: http://social.technet.microsoft.com/Forums/en/winserverGP/threads


Best regards

Meinolf Weber
MVP, MCP, MCTS
Microsoft MVP - Directory Services
My Blog: http://msmvps.com/blogs/mweber/

Disclaimer: This posting is provided "AS IS" with no warranties or guarantees , and confers no rights.


------------------------------------
Reply:
I agree, sometimes it's hard to find the right forum.

David Jenkins


------------------------------------

USB whitelist

Is there any way to lock down USB ports on XP so that only approved (encrypted) pen drives can be used to obtain data?

For example, if user A has 1) a company-approved encrypted USB thumb drive and 2) a personal non-encrypted USB thumb drive, then plugging in 1) allows them to save data to the drive, but plugging in 2) won't let them save data to it.

How can this be enforced centrally?


Reply:

------------------------------------

Benefits of Office 2010 (from IT POV)

I'm curious to hear from any Sys Admins that have upgraded to Office 2010 and any advantages or disadvantages you saw in the new version.

We are still on Office 2007 and aren't having any issues we need to solve, and I cannot find any features in 2010 that motivate me to spend the time upgrading the company. So I'm wondering how the rest of you are faring.


Lossless Audio Addict

Setup is Split Across Multiple CDs

Setup is Split Across Multiple CDs Lately I've seen a bunch of people hitting installation errors that have to do with the fact th...