Saturday, April 2, 2022

Database Mail Setup


If you have trouble sending messages using Database Mail and SELECT * FROM msdb.dbo.sysmail_event_log shows an error like the following: "The mail could not be sent to the recipients because of the mail server failure. (Sending Mail using Account 1 (2005-12-22T13:49:16). Exception Message: Could not connect to mail server. (An established connection was aborted by the software in your host m"

Take a look at your virus scanner's properties, specifically any "prevent mass-mailing worms from sending mail" setting. I had to add DatabaseMail90.exe to the exclusion list before it would let Database Mail send mail through.

 

Conditional Create

Hi!

I made a create script for an SSAS cube. Can I alter the script so that it only creates the cube if the cube doesn't already exist?

Default Member for Hierarchy in AS2005

In short, I'd like to try and specify a default member for one of my dimensions (excuse me if I use the wrong terminology, as I'm still new to AS and MDX).

Is it possible to set a default member on a hierarchy itself?  The only place I see the Default Member property is for each attribute available in the dimension, and not anywhere within the hierarchy itself.

I hope I'm making sense :)
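For what it's worth, in AS 2005 the DefaultMember property belongs to attribute hierarchies; as far as I know a user hierarchy has no default member of its own and simply reflects the defaults of the attributes at its levels. A default member can also be set in the cube's MDX script; the dimension and member names below are made up for illustration:

```mdx
// Hypothetical names: a [Date] dimension with a [Calendar Year] attribute hierarchy.
// Placed in the cube's MDX script (Calculations tab), this sets that attribute's
// default member for everyone who queries the cube.
ALTER CUBE CURRENTCUBE
UPDATE DIMENSION [Date].[Calendar Year],
    DEFAULT_MEMBER = [Date].[Calendar Year].&[2004];
```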


Numerical Dimension

I have a dimension representing the age of a client. I would like to create a Named Set filtering all clients whose age is over 80.

Something like this: [Dimension].[Age].Value > 80

But this kind of expression is not valid. Is there a way to do that?
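For illustration, a sketch of a named set using FILTER; the dimension/attribute names and the use of MEMBER_VALUE (which returns the typed, numeric value rather than the string name) are assumptions about the cube design:

```mdx
// Hypothetical: [Client] dimension with an [Age] attribute keyed on a numeric age.
CREATE SET CURRENTCUBE.[Age Over 80] AS
    FILTER(
        [Client].[Age].[Age].MEMBERS,
        [Client].[Age].CURRENTMEMBER.MEMBER_VALUE > 80
    );
```

If the age is stored as a string, CInt([Client].[Age].CURRENTMEMBER.NAME) > 80 is an alternative condition.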

How can I generate an RDL file from an MDX query in C#?

Are there methods to generate the RDL file or a report HTML page from a given MDX query?

For instance, I have an MDX query:

select dim1.a on rows, dim2.b on columns from cub1

How can I generate the corresponding RDL file to produce the report HTML page with C#?

I know that one can use XmlTextWriter to write the RDL file, but that is difficult work. Is there a built-in .NET class with methods to generate the RDL file?

Thanks

Reminder - please indicate if a reply has answered your question.

How do I indicate a post answered my question?

While reading a reply to your question, you will notice a button with an icon. Clicking on it will mark the post as the answer to your question.


Thanks!
Dan

Offline cube creation

Hello-

   I'm attempting to create an offline cube that can be imported into an Excel document as a PivotTable, but I've yet to find any way to do this through the SQL 2005 AS interface.

    Does anyone know of a way to create such a cube through Visual Studio or SQL Server Management Studio, or is it strictly an MDX task?

    Thanks,

 Tristan
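For reference, offline (local) cube files can be produced with the MDX CREATE GLOBAL CUBE statement executed against the server; all object names below are placeholders:

```mdx
// Creates a local .cub file containing a subset of the server cube [Sales].
CREATE GLOBAL CUBE [SalesOffline]
STORAGE 'C:\Cubes\SalesOffline.cub'
FROM [Sales]
(
    MEASURE [Sales].[Sales Amount],
    DIMENSION [Sales].[Product],
    DIMENSION [Sales].[Time]
);
```

Excel can then open the .cub file as a PivotTable data source.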

How can I use SemiAdditive aggregations on a dimension other than Measures?

Our cubes include a dimension that needs semiadditive aggregation (an inventory calculation) on some of its members. As I read the documentation (what there is of it), it seems the only way to do this is to make my dimension a parent-child dimension of type "Accounts".

Unfortunately, we have several levels in this dimension, each of which needs the functionality. SSAS tells us that we can have only one attribute in the entire dimension of type Account and only one member of type AccountType. So that eliminates all but one level in the dimension.

We tried configuring the dimension as a parent-child dimension. That seemed to work until we tried executing MDX queries in which the accounts dimension was cross-joined with another (any other) dimension, in which case all the values except inventory were returned as Null.

I'm not sure if we're not using this properly, or if this is a design feature or a bug.

Advice please?

 

 

 

Creating a dummy dimension level for storing calculated members

[SQL Server 2000 question]

I have a dimension that has one hierarchical (grouping) level (call it "Group 1") and a leaf level.  I want to create a member in the grouping level that will only be used to store calculated members.  The obvious way I found to do this was to create a dummy record in the underlying table with both the Group 1 attribute and the leaf attribute having the same value.  I could then hide the dummy member at the leaf level by setting the "Hide member if" property of the leaf level within the dimension to "Parent's Name".  I could then create calculated members and set their "parent" property to be the Group 1 name of the dummy record.  The problem with this is that when I then browse the cube data, I cannot drill down on the dummy Group 1 member, despite having created a child calculated member.  Can anyone help?

Recognition of Cube in Catalog

We have an application that uses MSADOMD classes to access cubes in a catalog. There are a total of 4 cubes in the catalog, and we could see all 4 cubes when accessed through the sample MDX application. But when we access them through our application, which uses MSADOMD, only three cubes are shown.

The following method call

CubeDefs cubes = m_catalog.getCubeDefs();
m_Cube = cubes.getItem(new Variant(sInCubeName));

returns the following error:

Item cannot be found in the collection corresponding to the requested name or ordinal

We checked the User roles setting from Analysis Services and found that there is no restriction based on User role to any of these Cubes.

What could be the reason?

 

 

Is the STDEVP Function in SSAS same as STDEVP in MS Excel?

Hi,

I am struggling to implement a Calculated Member that performs the same STDEVP() as in MS Excel over a fact table.

I have tried using the STDEVP() provided in the Cube Designer's Calculations tab, and have failed to get

STDEVP( «Set»[, «Numeric Expression»] )

working.

My FACT table looks like this
  Time_Key
  Customer_Key
  Transacted_Amount
  ...

I assumed «Numeric Expression» should be replaced with [Measures].[Transacted Amount] (which has SUM as its aggregation).

My Time dimension is the standard one with Year/Semester/Quarter/Month/Day.

What should go into «Set» for the STDEVP(), if I need the results to be sliced by different time levels and by different Customer_Keys?
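As a hedged sketch (cube, hierarchy, and level names assumed), the «Set» is usually the set of leaf-level points under the current slice, e.g. the days under the current Time member, so the population matches what Excel would compute over:

```mdx
// Population standard deviation of daily Transacted Amount under the
// current Time member, sliced by customer. All names are assumptions.
WITH MEMBER [Measures].[StdDevP Transacted] AS
    STDEVP(
        DESCENDANTS([Time].[Calendar].CURRENTMEMBER,
                    [Time].[Calendar].[Day]),
        [Measures].[Transacted Amount]
    )
SELECT [Measures].[StdDevP Transacted] ON COLUMNS,
       [Customer].[Customer].MEMBERS ON ROWS
FROM [MyCube]
```

Note that both Excel's STDEVP and MDX's STDEVP are population-based (divide by N), while the STDEV variants are sample-based (divide by N-1); mixing the two is a common source of mismatches.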

Problem with Dimension

Here is the scenario.

One of the attributes of my dimension encounters an error when I try to browse it using the Dimension Browser. The data for this attribute comes from a derived column. One of the parent keys of the attribute has a blank value, and when it is clicked, the error comes out.

Error Message:
The server sent an unrecognizable response.
Additional Information:
The hexadecimal value 0x0F is an invalid character. Line 1 (System.Xml)

The workaround I've used is to delete the rows from the table that produce the blank value in my derived column. That works fine; the parent key with the blank value is gone from my dimension.

But there were other parent keys that, when clicked, show the same error. When I browsed the derived column, there were values such as ASCII control characters.

What should I do to solve this problem? Do I have to delete the rows with invalid characters, or is there another workaround besides deleting the rows?

thanks,
L@rs

Incremental Data Load/Update For MOLAP Partitions

Consider the following scenario -

We have a cube built on top of a partitioned fact table (SQL Server 2005) with about 100 partitions. We have 100 corresponding partitions in the cube (with the same partition ranges) built using query binding.

Now, the nature of the data in the fact table is such that we can have updates and inserts happen across all 100 partitions in the fact table (with a Last_Modified_Date column identifying changed/inserted records).

In order to sync up with the changes in the underlying fact table, we are planning to implement incremental updates using out-of-line bindings.

We have not had a chance to prototype this yet, so question #1 is: will the use of out-of-line bindings such as "Select X, Y, Z From Fact_Table Where {Partition Definition} And {Last_Modified_Date Between Start_Time and End_Time}" work for newly INSERTED and/or UPDATED fact records, or will it result in inflated numbers for the UPDATED facts? How do we get around this without re-reading the entire contents of the partition?

Question #2: can we use the proactive caching setup of MOLAP to get around this? An illustrative example would be great.

Where are the values of the variables in the polling query stored? Is the cache persisted with the base data at some point?

Thanks in advance.

New Service Broker blog

Remus Rusanu has started his own blog titled "Service Broker Conversations." You can access it here. His first example shows how to build a pub-sub service using Service Broker in SQL Server 2005.

Unexpected Results with "Rolling 12 Month Sales" Calculated Member

I'll try my best to explain.

I have a Calculated Member in my 2005 cube [2004 Sales], and it's defined as:

([Measures].[Invoice],[Invoice Date].[Invoice Year].&[2004])

Within the Cube's Browser in BI Dev Studio, I can simply add [2004 Sales] and nothing else and it appears to return the correct value.

Now I have another Calculated Member for [Rolling 12 Month Sales] and it's defined as:

SUM( LASTPERIODS (12 , [Invoice Date].[Invoice Month]), [Measures].[Invoice] )

When I simply add this calculated member and only this calculated member, the result is the same as simply adding [Measures].[Invoice] to the browser.  When I add something from the Invoice Date dimension, the [Rolling 12 Month Sales] then takes on different values, but I don't think they're correct.  Or I could be interpreting the results wrong?

In any event, I would like to be able to simply add the [Rolling 12 Month Sales] calculated member and get the right results without the need to bring any dimensions into play, if possible.  I hope that made sense.

This calculated member is supposed to mimic the following SQL (when executed in December 2005):

SELECT SUM(Invoice)
FROM Sales S
INNER JOIN [Invoice Date] IDT
    ON S.[Invoice Date_key] = IDT.[Invoice Date_key]
WHERE [Invoice Date] >= '2004-12-01' AND [Invoice Date] <= '2005-11-30'
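For what it's worth, the behavior described is expected: written as LASTPERIODS(12, [Invoice Date].[Invoice Month]), the anchor resolves to that hierarchy's default member (normally the All member), so the sum collapses to plain [Measures].[Invoice] until a month enters the query context. Two hedged variants (the member key in the second is a guess at the hierarchy's key format):

```mdx
// Anchored on whatever month is current in the query context:
SUM(
    LASTPERIODS(12, [Invoice Date].[Invoice Month].CURRENTMEMBER),
    [Measures].[Invoice]
)

// Pinned to a fixed month, so no Invoice Date selection is needed:
SUM(
    LASTPERIODS(12, [Invoice Date].[Invoice Month].&[2005-11]),
    [Measures].[Invoice]
)
```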


SSMS presentation of IS Packages...

Please update the stored procedures "msdb.dbo.sp_dts_listpackages" and "msdb.dbo.sp_dts_listfolders" with an "ORDER BY name" clause.  The current behavior causes SSMS to present IS packages and their folders in a random order.
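Until that happens, a possible workaround is to query the underlying msdb tables directly with an ORDER BY; the table and column names below are as I recall them from SQL Server 2005 and are worth verifying:

```sql
-- SSIS packages stored in msdb, sorted by folder and package name.
SELECT f.foldername, p.[name]
FROM msdb.dbo.sysdtspackages90 AS p
LEFT JOIN msdb.dbo.sysdtspackagefolders90 AS f
    ON p.folderid = f.folderid
ORDER BY f.foldername, p.[name];
```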

Thanks.

ANN: Version 2 of Reporting Services Scripter now available

Download Version 2 of Reporting Services Scripter

An updated version of Reporting Services Scripter is now available (Build 2.0.0.0). This major release adds full support for SQL Server Reporting Services 2005 both as a script source and deployment target (from both SQL 2005 and SQL 2000). It also adds a resizable user interface (a popular request!) and a new operating mode, Transfer mode. In this mode, scripts are still generated; however, they are also automatically deployed to the selected target server. Please see the readme for full details of changes in this release. And yes, it's still completely free!

LastMonth and LastQuarter

Hi,

I'm trying to create two time-related calculated members:

LM: Last Month (if the current month is November, I would like to get October)
LQ: Last Quarter (if the current month is November, i.e. Quarter 4, I would like to get Quarter 3)

Any good hints, somebody? ParallelPeriod?
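A sketch of one approach, with assumed names. Note that PREVMEMBER on a user hierarchy steps across quarter and year boundaries correctly (October before November, Quarter 3 before Quarter 4), while PARALLELPERIOD is more natural for "same period, previous year" comparisons:

```mdx
// Assumes a [Time].[Calendar] user hierarchy with Quarter and Month levels
// and a [Measures].[Sales] measure; all names are placeholders.
WITH
// LM: the month before the current month.
MEMBER [Measures].[LM Sales] AS
    ( [Measures].[Sales],
      [Time].[Calendar].CURRENTMEMBER.PREVMEMBER )
// LQ: the quarter before the current month's quarter.
MEMBER [Measures].[LQ Sales] AS
    ( [Measures].[Sales],
      ANCESTOR([Time].[Calendar].CURRENTMEMBER,
               [Time].[Calendar].[Quarter]).PREVMEMBER )
SELECT { [Measures].[LM Sales], [Measures].[LQ Sales] } ON COLUMNS
FROM [Sales]
```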

Time Dimension: LastNonEmpty or Max?

I have a simple requirement to meet:

There are two FKs/Relationships from DIM_Customer to DIM_Time:
1. Created_Time_Key (indicating when the customer account was created)
2. Referred_by_Time_Key (indicating when the customer was referred by another customer, can be NULL if this is not a referred customer).

The requirement is to retrieve the Last Referral Date at multiple levels of Customer Groups. Customer Groups are implemented by a hierarchy in DIM_Customer.

I have managed to retrieve the MAX Referred_by_Time_Key at the different customer group hierarchy levels.  However, for the reporting front-end, what I need to display is the Last_Referred_by_Date.  Conceptually, it is easy to understand that I just need to look that up in DIM_Time and display the date accordingly.  But how is this implemented in Analysis Services?

I have tried to add a Measure from the DIM_Time.Date source column and apply Max or LastNonEmpty to it.  But I keep getting a looping error and the project cannot be deployed properly.

I have also tried to add the DIM_Time.Date as an attribute related to DIM_Customer.Referred_by_Time_Key.  However, I am not able to then perform aggregation since it is a Dimension attribute.

I have a feeling that this is supposed to be simple to implement and I am on the wrong track of things.

Any advice please?

thanks.

Deploy an Analysis Services Project Error: File already exists

Dear experts,

Whenever I try to deploy an Analysis Services project, the deployment fails with the following error:

"The following system error occurred:  File already exists. "

This error shows up even if I only create a new data source and then deploy it.

What's going on?

Automated Processing of cubes and dimensions


Hi everyone,

I am looking for some advice on how to automate the creation and processing of partitions (by day) in SSAS 2005.  I am considering going with SSIS and AMO scripts; however, I have found debugging them to be problematic and very slow.  They also stop responding when the job is cancelled, and the service needs to be restarted.  I liked the simplistic setup of SQL Agent & XML/A script tasks, but I'm not too sure how they perform, and how they could incorporate variables & configuration settings.

Any suggestions or examples of how everyone is doing this in real life would be great.  Project REAL seems to use XML/A, with a VB.NET script passing the XMLA in a string variable back to an AS processing task.  That seems rather convoluted if you could just use AMO in the script itself.

Also, is there a way to encapsulate XML/A, MDX, or AMO in a stored procedure?

Connecting to MSAS 2005 from SSIS 2005 64 bit

Hi all:

Are there any known issues with creating connections from Integration Services in a 64-bit environment, or am I missing something?

Cheers.

 

SQL2005 AS & MBS Axapta

Error: "The connection string to the repository needs to be specified in the 9.0 server properties (see the <DSO>...</DSO> section in the msmdsrv.ini file from Analysis Services 9.0)."

Is this a compatibility issue?

Need help In Decision Trees

Hello ,

I am doing some analysis on a video rental system to find the recency and frequency factors of the customers; basically, their behaviour.

I have three tables: FactVideoRental, DemVideo, and DemCustomer. I want to build a decision tree to see the results, but I am not able to go deeper than level 2 of the tree.

If someone has done a similar project, kindly share your thoughts, or you can email me at alirazvi@gmail.com

 

Regards

ali

test - ignore

test

code for the KPIs

Hi All

Can anybody please tell me how to code a KPI and deploy it on reports?

thanks in advance

rajnish

 

Transactions in NS API (Found a way to do it)

As the topic says, I found a way to give the Notification Services API the ability to do transactions.

This is accomplished by using DTC transactions. Given that the API just boils down to SQL calls, and SQL Server is DTC-aware, it will take advantage of the current transaction.

Thanks

Wayne Sepega

"Promoting" Fields to Attributes

Hi,

I'm currently working through the Project REAL documents... In the Analysis Services Technical Drilldown http://www.microsoft.com/technet/prodtechnol/sql/2005/realastd.mspx they describe the process of how to "promote" a field to an attribute.

The technical description is not very interesting to me. What really scares me are the "guidelines" on when to promote a field to an attribute. They talk about fields "never to be used in analysis". Well, if the fields are "useless", why are they in the database? If they are useful, why will they never be used for "analysis"? What does "analysis" mean?

The concept of the UDM defines the UDM as the single source of information for all reporting and analysis in an organization. So why (to come back to Project REAL) is an EAN code something to be excluded from the UDM? Because you'll never do "analysis" on it? But I'm sure that an EAN code is essential for lots of reports. So I really don't understand why the EAN code should be excluded from the cube.

So where is the problem? Doesn't Analysis Services 2005 deliver what it promises? Did I misunderstand the UDM? Are the recommendations from Project REAL still based on very "conservative" (SQL 2000-minded) thinking? Or is the UDM nothing more than "marketing"?

Error Deploying SSAS Cube

I'm trying to deploy Adventure Works DW Cube and I'm getting the following Error:

Error -1056833523 : File system error: Error opening file; \\?\C:\Program\Microsoft SQL Server\MSSQL.1\OLAP\Data\Adventure Works DW Standard Edition.2.db\Adventure Works DW.0.cub\Fact Internet Sales 1.19.det.xml is not a disk file or file is not accessible.

The file changes sometimes when I try again, but the error is the same.

 

//matbac

 Problem solved:

It was the Norman anti-virus that caused the problem.

Selecting the current date, yesterday, or the previous year, month, quarter, or whatever level in the time dimension?

Hello

Does there exist an intelligent time dimension which indicates e.g. the current date (year, quarter, month, day), or the previous date (year, quarter, month, day)?

I could make a processing-date dimension based on the GETDATE function of T-SQL (SQL Server), because we do processing daily.

Actually, we are trying to solve this issue in Scorecard; maybe Scorecard is intelligent enough to select the current or previous date in our time dimension?

Kind regards

Problems when Oracle running on same machine

I was deploying my Report Model in Visual Studio but got an error that it couldn't connect to my Report Server while it was running.

When I typed the address of my Report Server I got an error; then, when I typed the name of the localhost, I got an Oracle screen. So I decided to shut down all the Oracle services running on the machine. When I then tried to deploy again, it worked, and I could reach my Report Server without problems.


Reply:
I assume you aren't trying to build a Report Model on top of Oracle, it just happens to be running. The only thing I can think is that Oracle is taking over the HTTP port configured for Reporting Services.

------------------------------------
Reply:
I think this is the reason as well. I just wanted to point out to other users that they should look at this when they cannot connect to their Report Server while Oracle is running. Is there a way that I can give preference to Microsoft instead of Oracle?

------------------------------------
Reply:
I'm not very familiar with Oracle's HTTP stack. You don't mention which version of Oracle you are using but 10g comes with a modified version of Apache called OHS. I think there are configurations where you can tell it to listen on a port not managed by IIS.

------------------------------------

Sometimes I Get Errors When Processing Cube

Sometimes I will get errors when processing my AS2005 cube.  After some time passes, it processes successfully. 

I usually process the cube when there is new data available in the database, like this morning when I processed the cube essentially minutes after being told that there was new data available.  Would this have anything to do with it?

What if a user was connected to the cube within Excel and was in the middle of a query or constructing a report when I processed the cube?  Would that cause a failure in processing?

Unfortunately, I don't recall the errors I see.  I just know that after some time, I can process without any issues.

Finally, what are some best practices for processing a cube to account for scenarios that could cause failures (e.g. user connected to cube)?

Dimension Access for Roles in AS 2005

When creating a role in AS 2005 and specifying dimension access (on the Dimensions tab of the role definition in BI Dev Studio), why do some dimensions have the option of "None" as an access level while others only have "Read" and "Read/Write"?

For example, in looking at the Adventure Works sample solution for AS 2005, if you add a role and then go to the Dimensions tab of the role definition, you'll see that Account, Department, Organization, and Scenario only have "Read" and "Read/Write" as access options.  All the others have those two as well as "None".

At first, I thought that it was because of some unique setting for those four dimensions.  But I haven't found anything just by browsing their definitions.  Then I thought it was perhaps due to some unique usage of those four dimensions within the cube design.  They are all used with the Financial Reporting measure group, but so is the Date dimension and it has "None" as an access option.

Anyone have any ideas?  Probably something simple that I'm just overlooking, but I'd like to know why I can't set any of these to "None".

Thanks!

Dave Fackler


Reply:
Did you ever find an answer to this?  I'm having the same issue. 

------------------------------------

SSAS Dimension Access

Is there any way to deny access to an entire dimension in AS 2005?  If I create a role and go to the Dimensions tab, the only available options in the "Access" column are "Read" and "Read/Write".

I'm using Adventure Works DW and the BI Development Studio.

 

 

SQL Server Service Broker Admin Tool for SQL Server 2005 RTM

Back in July I released the (then) latest version of SsbAdmin. For those of you who don't know what it is: it's a tool which allows you to graphically administer SQL Server Service Broker.

I have now uploaded a new version which is compiled against the released versions of SQL Server 2005 and .NET 2.0.

You can download it from here: [0].

After download, un-zip to some directory and read the README.doc file.

[0]: http://staff.develop.com/nielsb/code/ssbadmin.zip


Reply:
Do you plan to give out the source as well? I could give a shot at adding Remus' import/export service listing scripts to that.

------------------------------------
Reply:
Hey Rushi,

I actually do have a half-working implementation of the import/export service listing. I'll send you the source for that when I'm back home (I don't have that source here).

Niels

------------------------------------

Interesting Behavior of Service Broker Queues

Hello All:

I've been experimenting with the new SQL Server Service Broker, and I think I've discovered some interesting behavior.  Service Broker relies on "Queues" to store messages that need to be processed.  Service Broker operates by sending a message from one Queue (the INITIATOR Queue) to another Queue (the TARGET Queue).  A Queue can have an "Activation Stored Procedure" associated with it.  This procedure is what actually processes the messages in the Queue.

The first behavior I observed related to the setting of a Queue's RETENTION parameter.  The RETENTION parameter indicates whether or not the Queue will retain a copy of the messages it receives.  By default, the parameter's value is "OFF" (meaning it will not retain messages).  In the Activation Stored Procedure of my TARGET Queue, I used "sp_send_dbmail" to send an e-mail message.  I wanted to capture the "conversation_handle" (a uniqueidentifier that identifies a particular message) and include it in the body of the e-mail.  I was unable to capture it, because the Queue's RETENTION parameter was "OFF".  When I tried to capture the conversation_handle from the INITIATOR queue (whose RETENTION parameter was "ON"), I was successful.  The moral of the story is you apparently need to have RETENTION = "ON" if you need to capture information from a Queue.

The second behavior I observed relates to a Queue's MAX_QUEUE_READERS setting.  This setting allows you to automatically invoke multiple instances of the Activation Stored Procedure.  As messages come into the Queue, the Queue creates additional instances of the Activation Stored Procedure, up to the number you specified in the MAX_QUEUE_READERS setting.  This allows parallel processing of messages in the Queue.  There is also a programming technique called a "RECEIVE LOOP" which is used for processing messages.  In the RECEIVE loop, the WAITFOR statement tells the Queue whether it should stay on constant alert for new messages or whether it should time out after a specified number of seconds.

If you have the Queue wait indefinitely (by not specifying a TIMEOUT value in the WAITFOR statement) and you have invoked multiple copies of the Activation Stored Procedure, the procedure will remain in memory indefinitely.  Therefore, if you make a change to the code of the Activation Stored Procedure, the change will NOT be reflected in the Activation Stored Procedure until you change the STATUS of the Queue.  I had changed my procedure so that it would not send an e-mail, but the e-mails kept coming.  The e-mails did not stop until I executed an ALTER QUEUE statement.  I ran "ALTER QUEUE queue_name WITH STATUS = OFF;" and then I ran "ALTER QUEUE queue_name WITH STATUS = ON;"  After that, the changes were reflected in the procedure.

Be aware of this behavior as you design your Queues.


Reply:

"The moral of the story is you apparently need to have RETENTION = ON if you need to capture information from a Queue."

I was wrong about this.  The reason I wasn't capturing the information the first time around was that I didn't alter the queue to reset the STATUS.  I have since been able to turn off RETENTION while still being able to capture information from the queue.


------------------------------------
Reply:

Ken, some necessary corrections:

- conversation_handle identifies conversation endpoints, not messages. Initiator and target have distinct conversation_handles. conversation_id identifies the conversation and is shared by initiator and target. A message is unique by conversation_handle AND message sequence number.

- RETENTION has nothing to do with the capability to 'capture' conversation handles. To obtain the conversation handle on the initiator side, one must get it from the output of BEGIN DIALOG. To obtain the conversation handle on the target side, one must put conversation_handle into the RECEIVE projection list. RETENTION is totally unrelated to this; it is intended to help applications implement compensation logic on failed dialogs. RETENTION keeps a copy of sent and received messages for the duration of the conversation. These messages are visible to the SELECT statement, but not to the RECEIVE statement.

- Any stored proc that has an infinite loop, once launched, will stay in memory until killed (explicitly or implicitly) or until server shutdown. This is not specific to activation; it is general SQL Server behavior. The only difference is that when this happens by invocation from a user connection, it is usually killed implicitly, by that connection being closed or by the batch being aborted. Turning the queue OFF will cause the infinite WAITFOR(RECEIVE ...) in the procedure to error and thus exit the RECEIVE loop. One could just as easily KILL the procedure (look up the SPID in sys.dm_broker_activated_tasks). Needless to say, activated procedures with infinite loops are a very bad practice. They lead to unnecessary consumption of server resources (threads at the very least) when there are no messages to receive.

- Altering the code of a stored proc does not affect the existing running instances of that stored proc. This behavior is general in SQL Server, not particular to activated stored procs. Only new invocations of the stored proc will pick up the change.
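The SPID lookup in sys.dm_broker_activated_tasks looks like this (a sketch; the KILL is commented out on purpose):

```sql
-- Sessions currently running activation stored procedures.
SELECT at.spid, at.procedure_name, at.queue_id
FROM sys.dm_broker_activated_tasks AS at;

-- Then, for the offending session:
-- KILL 53;   -- hypothetical spid taken from the query above
```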

I hope this clarifies some of the behavior you observed.
HTH,

~ Remus


------------------------------------
Reply:

Thanks for the clarifications.  I wasn't thinking of the RECEIVE LOOP as an infinite loop, since you don't see the timer on the Query Editor window keep running, but it obviously is one.  I also wasn't clear about the circumstances under which a stored procedure stays in memory, so thanks for that information as well.

Now I want to show you a code sample to clarify what I am doing.  Is this what you mean by "put the conversation_handle into the RECEIVE projection list"?

<code>
USE ServiceBrokerTest
GO

/*
DROP PROCEDURE usp_FTPService
*/

ALTER PROCEDURE usp_FTPService
AS
BEGIN
    DECLARE @conversationHandle UNIQUEIDENTIFIER
    DECLARE @messageBody NVARCHAR(MAX)
    DECLARE @messageTypeName SYSNAME
    DECLARE @eMailBody NVARCHAR(MAX)

    WHILE (1 = 1)
    BEGIN
        BEGIN TRANSACTION

        WAITFOR (
            RECEIVE @messageTypeName    = message_type_name,
                    @conversationHandle = conversation_handle,
                    @messageBody        = message_body
            FROM [FTPQueue])

        -- If a message was received, process it, else skip
        IF (@@rowcount <= 0)
            BREAK;

        -- Process ObjectFTPStartRequest message
        IF @messageTypeName = N'ObjectFTPStartRequest'
        BEGIN
            SET @eMailBody = 'FTP job for Object conversation '
                + CAST(@conversationHandle AS NVARCHAR(MAX)) + ' completed'

            EXEC msdb.dbo.sp_send_dbmail
                @profile_name = 'Ken',
                @recipients   = 'username@domain.com',
                @subject      = 'FTP Service Response (ASP)',
                @body         = @eMailBody;

            SEND ON CONVERSATION @conversationHandle
                MESSAGE TYPE [LookupRaceFTPStartResponse]
                (N'<ObjectFTPStartResponse>
                    FTP job for Object completed
                   </ObjectFTPStartResponse>')

            END CONVERSATION @conversationHandle
        END

        COMMIT
    END

    COMMIT
END
</code>

If that's what you meant, then that's what I was doing.  The reason I was having a problem was solely due to the fact that the first version of the procedure was still running in memory. 


------------------------------------
Reply:

Putting the conversation_handle into the projection list simply means having conversation_handle in the list of columns you RECEIVE. You are doing this fine.

The code you posted needs two changes:
- The WAITFOR(RECEIVE ...) must have a timeout. Typically, the value should be a few seconds. The idea is that the procedure should linger a bit, in the hope that a new message arrives and can be processed. Too short an interval and the procedure is going to miss new messages, exiting and being re-launched, which is a somewhat expensive operation. Too long and your procedure is going to consume server resources without doing anything.
- You must add a TOP(1) clause to the RECEIVE. Otherwise you will receive multiple messages, but only process the last one.
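Applied to the WAITFOR in the posted procedure, the two changes look like this (the 5-second timeout is an arbitrary choice):

```sql
WAITFOR (
    RECEIVE TOP(1)
        @messageTypeName    = message_type_name,
        @conversationHandle = conversation_handle,
        @messageBody        = message_body
    FROM [FTPQueue]
), TIMEOUT 5000;
```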

HTH,

~ Remus


------------------------------------
Reply:

I don't think it's proper to say the TIMEOUT is required, since the T-SQL language does not require a TIMEOUT with the WAITFOR clause.  If it were required, I would get a syntax or runtime error upon execution of the code.

As for whether it's a good idea to use a TIMEOUT, here's what Roger Wolter (Microsoft's Group Program Manager for SQL Server Service Broker) says in his book "The Rational Guide to SQL Server 2005 Service Broker" (Page 56):

"A WAITFOR with no timeout means the RECEIVE will wait for a message no matter how long it takes.  Waiting for a message to appear on the queue is generally more efficient than polling the queue by using the RECEIVE command periodically.  The main exception to this would be a low priority queue that is only polled for messages periodically when higher priority queues are empty."

As for the use of the TOP clause, I quote Roger again (again on Page 56):

"Retrieving all the messages available utilizes fewer server resources than multiple RECEIVE commands, so you should avoid the TOP clause whenever possible.  The main reason to use the TOP clause is in stored procedures where receiving one message at a time into TSQL variables simplifies programming, or when there's a chance that a RECEIVE command will retrieve more messages than you want to process in a single transaction."


------------------------------------
Reply:

Indeed, the TIMEOUT is not syntactically required. But having no TIMEOUT in an activated stored procedure loop practically turns it into an infinite loop, and you'll run into the problems you encountered.

Assignment in the RECEIVE projection list (e.g. @conversation_handle = conversation_handle) requires a TOP(1). It is true that using the TOP clause is less efficient, but otherwise the procedure gets significantly more complex. You'd have to RECEIVE INTO @tableVariable, open a cursor over the table variable and scan this cursor. That is what the "simplifies programming" part of his quote refers to. You either use a simple T-SQL program (i.e. @variables) but pay the penalty of the TOP clause, or you use a more complex T-SQL program (i.e. @tableVariable and a cursor). Roger himself mentions this; see the two examples of RECEIVE on page 57.
Note that if 99% of your RECEIVEs would return only one message anyway (as is often the case), then the TOP is not expensive at all.

See the Receive Loop example on page 63; you'll see that Roger uses both a TOP(1) clause and a 5-second TIMEOUT.
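Putting the two recommendations together, a minimal activated procedure might look like the following sketch (queue, procedure and column handling are illustrative, not taken from the poster's code):

```sql
CREATE PROCEDURE dbo.ProcessTargetQueue
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @ch UNIQUEIDENTIFIER, @msg_type SYSNAME, @body VARBINARY(MAX);

    WHILE (1 = 1)
    BEGIN
        BEGIN TRANSACTION;

        -- TOP(1) is required when assigning into variables;
        -- TIMEOUT lets the proc linger briefly for new messages, then exit.
        WAITFOR (
            RECEIVE TOP(1)
                @ch       = conversation_handle,
                @msg_type = message_type_name,
                @body     = message_body
            FROM dbo.TargetQueue
        ), TIMEOUT 5000;

        IF (@@ROWCOUNT = 0)
        BEGIN
            -- No message arrived within 5 seconds; exit and let
            -- activation re-launch the procedure when needed.
            COMMIT TRANSACTION;
            BREAK;
        END

        -- ... process @body according to @msg_type here ...

        IF @msg_type = N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog'
            END CONVERSATION @ch;

        COMMIT TRANSACTION;
    END
END
```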

HTH,
~ Remus


------------------------------------

CREATE SUBCUBE or FILTER function

Hi All,
Sometimes it is possible to get the same result in MDX either by using CREATE SUBCUBE or by using the FILTER function. Which method is better?
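For context, the two forms being compared might look like this (a sketch against the sample Adventure Works cube; names are illustrative):

```mdx
-- Restrict the session's cube space once, up front:
CREATE SUBCUBE [Adventure Works] AS
    SELECT { [Geography].[Country].&[United States] } ON COLUMNS
    FROM [Adventure Works];

-- vs. filtering a set inside each individual query:
SELECT
    [Measures].[Internet Sales Amount] ON COLUMNS,
    FILTER(
        [Customer].[Customer].MEMBERS,
        [Measures].[Internet Sales Amount] > 1000
    ) ON ROWS
FROM [Adventure Works];
```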

Thanx a ton for any advice :)

Rakesh


The tree hierarchy interface in cube browser

Hi,

  When browsing cube in SQL Server Management Studio, there is a tree hierarchy which lists measures and dimensions at left side.

  How could I embed this tree hierarchy interface to my application?

thanks,

New release of SqlClrProject

I thought I better post before this year ends. :-)

Anyway, you may know that I have developed an MSBUILD task DLL for deployment of SQLCLR assemblies. I also created some templates for use from within Visual Studio 2005, the SQLCLRProject.

I have re-created this as an Add-In for VS, so you now have the ability to automatically deploy from inside of VS as well as debug! I have also created a GUI front-end for those who don't use VS which allows deploying of assemblies using this front-end (setting properties etc). So what exactly does all this do:

The SqlClrProject is a project type for Visual Studio 2005 for creation and deployment of assemblies to SQL Server 2005. It consists of templates for both Visual C# (C#) and Visual Basic (VB) with skeleton code for creation of CLR methods to be used as stored procedures, User Defined Functions (UDF's), triggers, User Defined Types (UDT's) and User Defined Aggregates (UDA's). Some of the features are:

  • Ability to deploy the assembly to SQL Server and create the SQL Server methods, all from inside the Visual Studio IDE.
  • Automatic creation of T-SQL deployment scripts.
  • Automatic creation of T-SQL DML scripts for testing of the created objects.
  • Ability to alter an assembly and only deploy newly added methods from the assembly.
  • Ability to create objects in a non default schema "schemaname.objectname".
  • Debug facilities from inside VS.

For you who have used the SQL Server Project project type from VS to do this, you may wonder what the differences are. Things that the SqlClrProject allows you to do, which the VS SQL Server project can't do:

  • Automatic creation of T-SQL deployment scripts.
  • Automatic creation of T-SQL DML scripts for testing of the created objects.
  • Ability to alter an assembly and only deploy newly added methods from the assembly.
  • Ability to create objects in a non default schema "schemaname.objectname".

Anyway, if you are interested you can download all needed from here: [0].

Un-zip the file and read the README.txt file in the root directory.

Comments are as usual very welcome!!!

Finally I wish you all a very Happy New Year!!!

Niels

[0]: http://staff.develop.com/nielsb/code/DeployPropAndAddIn.zip

Welcome everyone

We wanted to welcome you to this new forum and encourage you to post questions and/or post answers.

Reply:

Yo Janey,

Having fun with SQL Server 2005 and Express <not>! Pulled out most of my hair, now down to fingernails. Have the developers call me and I'll be happy to advise them on design and help threads.

Luv

Dad


------------------------------------

MDX Query in SSIS

Is it possible to run an MDX query against a cube and then insert the value into a SQL 2000 database?  It was possible to do it in DTS 2000 using an OLE DB connection and then a Data Transformation task.  We've been trying a similar configuration in SSIS but it seems to only take SQL queries.  How might we go about setting up an MDX query?

Writeback with currency conversion ?

Does anyone know what support is provided in SSAS 2005 on currency conversion based cube updates ?

I have put together a simple example on the AdventureWorksDW DB.

In my DSV, I have DimTime, DimCurrency and DimCustomer as dimension tables plus FactCurrencyRate and FactInternetSales as fact tables.

I then create a simple sample cube based on these 5 tables.

I now have my [CurrencyTest] cube made of 2 measure groups : "Fact Currency Rate" and "Fact Internet Sales".

I have set up the [Dim Currency] dimension to be typed as CURRENCY, the measure group "Fact Currency Rate" as Exchange Rate, and added the Currency Conversion BI to the cube (one-to-many relationship).

It all works fine as far as reporting is concerned. Changing currency works fine.

Now, I write-enable the Fact  Internet Sales measure group.

Updating the cube with no restriction on the "Reporting Currency" dimension works fine.

Now, when I try to perform an UPDATE CUBE mdx statement on a specific currency (on the [Reporting Currency] dimension), I don't get any error. However, the value is not modified at all (even after commit).
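The kind of statement being described might look like this (a sketch only; measure and member names are illustrative, not from the poster's cube):

```mdx
UPDATE CUBE [CurrencyTest]
SET (
    [Measures].[Sales Amount],
    [Reporting Currency].[Currency].&[EUR],
    [Dim Customer].[Customer].&[11000]
) = 1000
USE_EQUAL_ALLOCATION;
```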

Maybe I should rather use a many-to-many relationship with the currency dimension...

Any suggestion on this ?

thx & rgds,

Francois

How do you support the Oracle CLOB Data Type?

I am using VS2005 Analysis Services to create a cube from an Oracle data source.

My problem is that 3 of the tables use a CLOB field.
So my "Data Source View" can only link to 25 of the 28 tables.

Is there a workaround so that I can include the 3 CLOB tables?

AS 2005 Using multiple Roles for a user

Hello,

I am setting up roles in the cube to restrict what the users can see. What I understand is that one user can belong to several groups in AD. These groups are supposed to have a corresponding role in the cube.

The ideal situation would be that a combination of roles would mean an intersection of the dimension data. I have heard that in other tools you work as follows: remove all rights and then add on....

My situation in SSAS is as follows (one user belong to all three groups):

Role                     | AD group                 | Dimension  | Criteria
Market_North             | Market_North             | dim_Market | only see North and members
Market_East              | Market_East              | dim_Market | only see East and members
Organization_Merchandize | Organization_Merchandize | dim_Org    | only see Merchandize and members


The behaviour is that I see either nothing or too much when my user belongs to all AD groups/roles (one AD group corresponds to one role). This happens when I view both dimensions. If I browse only dim_Market I see only North and East. If I browse only Organization I see nothing...

I thought that in order to ensure not seeing too much I would deselect all members in dim_Market in role Organization_Merchandize and vice versa, but this does not work out.

- How do I go about this, is it at all possible? I know that in 2000 you may only have one role defining what you may see, otherwise you will see more than allowed.  My tests so far (in AS2005) indicate that you may only have one role...


Sincerely,

CreatedTimestamp doesn't have a value

Hi,

I'm playing around with AS2005 for the first time in a long time and this is just a quickie question.

Here's the XML for my data source:

<DataSource xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:dwd="http://schemas.microsoft.com/DataWarehouse/Designer/1.0" xsi:type="RelationalDataSource" dwd:design-time-name="c35e52ca-345e-44ae-9629-38f2dea36ea9" xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <ID>Adventure Works DW</ID>
  <Name>Adventure Works DW</Name>
  <CreatedTimestamp>0001-01-01T00:00:00Z</CreatedTimestamp>
  <LastSchemaUpdate>0001-01-01T00:00:00Z</LastSchemaUpdate>
  <ConnectionString>Provider=SQLNCLI.1;Data Source=cgojthomson;Integrated Security=SSPI;Initial Catalog=AdventureWorksDW</ConnectionString>
  <ConnectionStringSecurity>Unchanged</ConnectionStringSecurity>
  <ImpersonationInfo>
    <ImpersonationMode>ImpersonateCurrentUser</ImpersonationMode>
    <ImpersonationInfoSecurity>Unchanged</ImpersonationInfoSecurity>
  </ImpersonationInfo>
  <Timeout>PT0S</Timeout>
</DataSource>

Is there a reason that CreatedTimestamp and LastSchemaUpdate don't have the proper values in them? Do these nodes have any use within AS?

Probably lots more of these to come so bear with me :)

-Jamie


Time Analysis Dimension (Shell Dimension) Calculations not working in AS2005

Hello,

In AS 2000, we had created a "shell dimension" to calculate different time periods (YTD, MTD, previous MTD, trailing 28 days). This dimension basically had several calculated members which did these time calculations. We would reference these members by [Period Calculations].[YTD], for example.

In AS 2005, is there an equivalent way to do this without breaking existing cubes and reports (which have MDX referencing this shell dimension)?
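For reference, the calculated members on such a shell dimension are typically defined along these lines (a sketch only; the [Time].[Calendar] hierarchy and member names here are illustrative assumptions, not from the original cube):

```mdx
CREATE MEMBER CURRENTCUBE.[Period Calculations].[YTD] AS
    Aggregate(
        PeriodsToDate([Time].[Calendar].[Calendar Year],
                      [Time].[Calendar].CurrentMember)
    );

CREATE MEMBER CURRENTCUBE.[Period Calculations].[Trailing 28 Days] AS
    Aggregate(
        LastPeriods(28, [Time].[Calendar].CurrentMember)
    );
```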

Problem connecting to SSAS

I have installed the September CTP and am having problems connecting to Analysis Services.  The error message I'm getting is "No connection could be made because the target machine actively refused it (System)".  The log-in is set identically to what's set in Reporting Services, SQL Server 2005, etc.  Has anybody else come across this error?


Reply:
I have exactly the same problem and can't find what the problem is.  I can't connect using SQL Server Management Studio at all.  I tried turning off my virus scan and firewalls but nothing helped.

------------------------------------
Reply:
I have the same problem.

------------------------------------
Reply:
In my case I am only trying to connect to the RTM version of Reporting Services.

Has anybody found a solution yet? I see a lot of posts with this problem.

------------------------------------

Two ways to view member values

Hi,

I'm working on an SSAS 2005 project where I migrate an Excel pivot table based on a flat file (coming from an Oracle source). The previous implementation had most of the members defined twice: first by description and code, and second by code and description, where code is the key of each customer. In the flat file, for each level with the need for "dual view support", there are 2 columns holding the same data but in different formats.

So, for example - just for illustration - someone could set the rows to be

Customer1 (1234)
Customer2 (4553)
Customer3 (3573)
etc

where, someone else

(1234) Customer1
(4553) Customer2
(3573) Customer3

As you might have guessed, this freaked me out, because the flat file has lots of redundant data and is now about 120MB fat!

What I'm asking is: how can I implement such a feature in my cube? Is there a designated SSAS 2005 way to do it elegantly, or do I have to define each level that requires "dual view support" twice in my hierarchies?

SSAS 2005 Dimension Browser Error (Input String not in Correct Format)

Hello everyone,

Another developer I'm working with is having trouble browsing dimensions (double-click the dimension in Solution Explorer, click on the Browser tab).  The "browsing" area turns grey with an error message of

TITLE: Microsoft Visual Studio
------------------------------
 
Input string was not in a correct format.
 
------------------------------
BUTTONS:
 
OK
------------------------------

I believe we have narrowed it down to this particular user's machine.  I ensured they had the same permissions (we are both set up as admins in SSAS), opened their solution on my machine (this worked), and opened the solution on my machine using "Run as..." with his username (this also worked).

We have also tried this with different solutions and have the same results.  This problem seems very interesting since it is possible to use the cube browser to view the dimension hierarchies.

Has anyone else come across a similar problem?  Does anyone have suggestions on what to check to try to address the problem?

Thanks in advance for any help.

Regards,

Dan

Sql Express and One Care

For those of you that are using OneCare and SQL Express, there is a problem.  You will receive an error message in the Windows log that reads:

FCB::RemoveAlternateStreams: Operating system error 6 (The handle is invalid) occurred while creating or opening file 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\master.mdf'

This message is for: Master, Model, Temp, etc.....

OneCare blocks access to alternate NTFS streams.

Thanks for the email, Ogre.

Cartridges

Hi,

Is there any information about cartridges (the XSL files which "teach" SSAS how to access databases other than SQL Server)? The only information I found was that there is no information...

Thanks,

Drillthrough Security

Hi,


I have an AS2000 cube that pulls back "sensitive" information via its drillthrough.

I have 2 sets of users: Group 1 and Group 2.

I need Group 1 to be able to see everything when using the drillthrough, and I need Group 2 to see everything but 1 column.  This 1 column holds the sensitive data.  It is acceptable if the data is just masked rather than completely removed.

For performance reasons, it's important that I stick with 1 cube - I DONT WANT 2 CUBES...

Can anyone please tell me how I can achieve this?

Thanks in advance.

 

Jon Derbyshire

MSOLAPADMINLib2.dll in 2005 ?

Hi,

I have a reference to MSOLAPADMINLib2.dll. This DLL is not supported by AS2005.

What can I do to access AS2005 through my application to build the cube?

Updated Version of SqlClrProject

Back in December (the 26th, to be exact) I released a new version of the SqlClrProject and wrote about it here [0].

One drawback of the deployment functionality was that you could only deploy locally, as the deployment process used CREATE ASSEMBLY based on the location of the assembly file; i.e. CREATE ASSEMBLY myasm FROM 'path_to_dll'.

Well, this is now fixed, as I have changed the code so it now uses the binary representation of the assembly. Those of you who know your CREATE ASSEMBLY stuff may think "what about dependent assemblies?", as dependent assemblies are not automatically deployed when using the binary representation. Well, the code handles that as well: it uses reflection to see what the dependent assemblies are and retrieves the binary representation of those as well during deployment.
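The difference described above boils down to the two forms of CREATE ASSEMBLY (a sketch; the assembly name, path and hex bytes are placeholders):

```sql
-- Path-based deployment: the SQL Server service account must be able to
-- read the file, which effectively limits you to local or share-reachable deploys.
CREATE ASSEMBLY MyAsm
FROM 'C:\deploy\MyAsm.dll'
WITH PERMISSION_SET = SAFE;

-- Binary-based deployment: the assembly bits travel inside the batch itself,
-- so the tool can deploy to any server it can connect to.
CREATE ASSEMBLY MyAsm
FROM 0x4D5A90000300...  -- hex dump of the DLL, truncated here
WITH PERMISSION_SET = SAFE;
```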

So if you have installed the original SqlClrProject, which was available from here [1], there is now an update here [2]. Download the update, un-zip and follow the readme.txt. The original download has now been updated as well, so if you haven't retrieved it yet, go and get it [3].

Have Fun!!

Niels

[0]: http://staff.develop.com/nielsb/PermaLink,guid,200487e5-f2e2-449c-8a3f-ee28e6aa8735.aspx

[1]: http://staff.develop.com/nielsb/code/DeployPropAndAddIn.zip

[2]: http://staff.develop.com/nielsb/code/yukondeployupdate.zip

[3]: http://staff.develop.com/nielsb/code/DeployPropAndAddIn.zip

Deployment overwrites partitions

When I deploy to Analysis Services 2005 from BI Studio, it prompts me that 'Deployment will overwrite existing objects' and when they are deployed, it appears that certain partitions are lost.

How can I deploy while retaining existing data? 

Error in the encryption Library ("Full detail")

Hi Guys,

Here is the scenario,

Setup of Analysis Service:
Data Dir: I:\OLAP_DATA\

Data Storage of Cube Partition: Drives (C:\OLAP_DATA\,D:\OLAP_DATA\,E:\OLAP_DATA\,I:\OLAP_DATA\)

I created a backup of our OLAP database with the configuration above, deleted the OLAP database, and restored the backup. I changed the configuration during the restoration.

Below is the Configuration for the Restoration database

Setup of Analysis Service:
Data Dir: F:\OLAP_DATA\

Data storage of Cube Partition: Drive (F:\OLAP_Data, (default location))

I have a process that runs daily. After the restoration, the first process worked fine but the succeeding processes failed. What I did was restore the backup and reprocess up to the point of failure; the following days' processes failed again. I investigated the Data Source property of the OLAP database and found that the connection string was empty. I tried supplying the connection string (leaving the other properties at their default values), but when I click OK an error shows up.

Below is the Error:

"Error in the encryption library: Failed to encrypt sensitive data. Possibly the encryption key is inaccessible because of improper service account change. (Microsoft.Analysis)"

Need help or insight regarding this issue.

Regards,
L@rry


Add an adhoc record to the mdx result

I have the following mdx query

with
      member [Measures].[ParameterCaption] AS '[Company].[Branch].Currentmember.Member_Caption'
      member [Measures].[ParameterValue] AS '[Company].[Branch].Currentmember.Uniquename'
select 
    {[Measures].[ParameterCaption], [Measures].[ParameterValue]} on columns,
    {[Company].[Branch].ALLMEMBERS} on rows
from [Profit And Loss]

I would like to add one more ad hoc row to the result set: a member identifying All Branches. I don't want this member to appear in the OLAP cube, but would like the user to view it in the result set. What should I do to achieve this?
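One possible approach (a sketch only, untested against this cube; the level name [Branch].[Branch] is an assumption) is to declare the extra row as a query-scoped calculated member. It lives only for the duration of the query and never becomes part of the cube, and since ALLMEMBERS includes query-scoped calculated members, it should appear as an extra row:

```mdx
WITH
    MEMBER [Company].[Branch].[All Branches] AS
        Aggregate( [Company].[Branch].[Branch].MEMBERS )
    MEMBER [Measures].[ParameterCaption] AS
        [Company].[Branch].CurrentMember.Member_Caption
    MEMBER [Measures].[ParameterValue] AS
        [Company].[Branch].CurrentMember.UniqueName
SELECT
    { [Measures].[ParameterCaption], [Measures].[ParameterValue] } ON COLUMNS,
    { [Company].[Branch].ALLMEMBERS } ON ROWS
FROM [Profit And Loss]
```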

Thanks.

Encryption error processing cube with Visual Studio 2005 Analysis Services

Hi!

I am trying to process a cube in analysis services of visual studio
2005, with sql server 2005. The error is:

"Errors in the encryption library: Failed to encrypt sensitive data.
Possibly the encryption key is inaccessible because of improper service
account change."

Please, if anyone knows how to solve this error, please let me know...

Cristovao


Reply:
We seem to have this problem as well. Any answers to this one? Our data warehouse was up and running for 4 months until this weekend, when we got this problem. Haven't solved it yet.

------------------------------------
Reply:

Hi Cristovao

I also experienced the same error when I tried to configure the data source property of my OLAP database. In the Data Source Properties dialog box, I set the connection string and left the security settings at their default value "ImpersonateServiceAccount", and the error shows up when clicking OK.

Larry 


------------------------------------

Error in the encryption library


Hi guys,

I'm configuring the data source property of my OLAP database using Management Studio. I've set the connection string and left the other properties at their default values.

Below is the error message I got when I click OK:

Error in the encryption library: Failed to encrypt sensitive data. Possibly the encryption key is inaccessible because of improper service account change. (Microsoft.Analysis)

Right now I can't process cubes, because I cannot configure the data source property of my OLAP database.

Please advise on what to do.

Regards,

Larry


Errors in the encryption library When Processing Cube


Hi!

I am trying to process a cube in analysis services of visual studio
2005, with sql server 2005. The error is:

"Errors in the encryption library: Failed to encrypt sensitive data.
Possibly the encryption key is inaccessible because of improper service
account change."

Please, if anyone knows how to solve this error, please let me know...

Thanks,

Joseph

Setup is Split Across Multiple CDs

Lately I've seen a bunch of people hitting installation errors that have to do with the fact th...